See How Dynamic Text on a Landing Page Helped Increase Conversions by 31.4% [A/B Test Reveal]

a/b testing with ConversionLab

Pictured above: Rolf Inge Holden (Finge), founder of ConversionLab.

Whether your best ideas come to you in the shower, at the gym, or have you bolting awake in the middle of the night, sometimes you want to quickly A/B test to see if a given idea will help you hit your marketing targets.

This urge to split test is real for many Unbounce customers, including the Norway-based digital agency ConversionLab, which counts Campaign Monitor among its clients.

Typically this agency’s founder, Rolf Inge Holden (Finge), delivers awesome results with high-performing landing pages and popups for major brands. But recently his agency tried an experiment we wanted to share because of the potential it could have for your paid search campaigns, too.

The Test Hypothesis

If you haven’t already heard of San Francisco-based Campaign Monitor, they make it easy to create, send, and optimize email marketing campaigns. Tasked with running especially effective PPC landing pages for the brand, Finge had a hypothesis:

If we match copy on a landing page dynamically with the exact verb used as a keyword in someone’s original search query, we imagine we’ll achieve higher perceived relevance for a visitor and (thereby) a greater chance of conversion.

In other words, the agency wondered whether the precise verb someone uses in their Google search has an effect on how they perceive doing something with a product, and—if they were to see this exact same verb on the landing page—whether this would increase conversions.

In the case of email marketing, for example, if a prospect typed “design on-brand emails” into Google, ‘design’ is the exact verb they’d see in the headline and CTAs on the resulting landing page (vs. ‘build’, ‘create’, or another alternative). The agency wanted to carry the exact verb through for relevance, no matter what the prospect typed into the search bar; outside the verb, the rest of the headline would stay the same.

The question is, would a dynamic copy swap actually increase conversions?

Setting up a valid test

To run this test properly, ConversionLab had to consider a few table-stakes factors. Namely, the required sample size and duration (to understand if the results they’d achieve were significant).

In terms of sample size, the agency confirmed the brand could get the traffic needed to the landing page variations to ensure a meaningful test. Combined traffic to variant A and B was 1,274 visitors total and—in terms of duration—they would run the variants for a full 77 days for the data to properly cook.

To determine the amount of traffic and duration you need for your own tests to be statistically significant, check out this A/B test duration calculator.
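Under the hood, such calculators run a standard two-proportion power calculation. Here is a rough sketch of the math; the 10% baseline conversion rate and the target lift are invented values for illustration, not figures from this campaign:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate with a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# e.g. an assumed 10% baseline rate and a hoped-for 31.4% relative lift:
print(sample_size_per_variant(0.10, 0.314))  # roughly 1,600 visitors per variant
```

Notice how the required sample size balloons as the lift you want to detect shrinks: detecting a 10% lift at the same baseline takes roughly ten times as many visitors as detecting a 31.4% one.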

Next, it was time to determine how the experiment would play out on the landing page. To accomplish the dynamic aspect of the idea, the agency used Unbounce’s Dynamic Text Replacement feature on Campaign Monitor’s landing page. DTR helps you swap out the text on your landing page with whatever keyword a prospect actually used in their search.
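Mechanically, this kind of replacement amounts to reading a keyword from the landing page URL’s query string and substituting it into a template, with a default when nothing is passed. The sketch below is a simplified illustration, not Unbounce’s actual implementation; the `verb` parameter name, the template, and the fallback are all assumptions:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical sketch: the ad's destination URL carries the matched
# keyword's verb as a query parameter, and the page swaps it into the
# headline, falling back to a default verb when the parameter is absent.
HEADLINE_TEMPLATE = "{verb} on-brand emails with ease"
DEFAULT_VERB = "create"  # the campaign's stated fallback verb

def render_headline(landing_url: str) -> str:
    params = parse_qs(urlparse(landing_url).query)
    verb = params.get("verb", [DEFAULT_VERB])[0]
    return HEADLINE_TEMPLATE.format(verb=verb.capitalize())

print(render_headline("https://example.com/lp?verb=design"))  # uses "Design"
print(render_headline("https://example.com/lp"))              # falls back to "Create"
```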

Below you can see a few samples of what the variants could have looked like once the keywords from search were pulled in (“create” was the default verb if a parameter wasn’t able to be pulled in):

A/B test variation 1
A/B test sample variation

What were the results?

When the test concluded at 77 days (Oct 31, 2017–Jan 16, 2018), Campaign Monitor saw a 31.4% lift in conversions using the variant in which the verb changed dynamically. In this case, a conversion was a signup for a trial of their software, and the test achieved 100% statistical significance with more than 100 conversions per variant.
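If you want to sanity-check a result like this yourself, a two-proportion z-test is the standard tool. The conversion counts below are invented for illustration (the article only tells us there were 1,274 visitors and 100+ conversions per variant), so treat this as a sketch of the method rather than a reconstruction of the actual data:

```python
import math
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on raw conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented split of the 1,274 visitors, with 100+ conversions per variant:
z, p = two_proportion_z(conv_a=100, n_a=637, conv_b=131, n_b=637)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 -> statistically significant
```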

The variant that made use of DTR to send prospects through to signup helped lift conversions to trial by 31.4%

What these A/B test results mean

In the case of this campaign, the landing page variations (samples shown above) prompt visitors to click through to a second page where they start their trial of Campaign Monitor. The tracked conversion goal (measured outside of Unbounce reporting) was signups on this second page after clicking through from the landing page.

This experiment ultimately helped Campaign Monitor understand that the verb someone uses in search can indeed help increase signups.

The results of this test tell us that when a brand mirrors an initial search query as precisely as possible from ad to landing page, the visitor perceives the page as relevant to their needs and is thereby more primed to click through to the next phase of the journey and, ultimately, convert.

Message match for the win!

Here’s Finge on the impact the test had on the future of their agency’s approach:

“Our hypothesis was that a verb defines HOW you solve a challenge; i.e. do you design an email campaign or do you create it? And if we could meet the visitor’s definition of solving their problem we would have a greater chance of converting a visit to a signup. The uplift was higher than we had anticipated! When you consider that this relevance also improves Quality Score in AdWords due to closer message match, it’s fair to say that we will be using DTR in every possible way forwards.”

Interested in A/B testing your own campaigns?

Whether you work in a SaaS company like Campaign Monitor, or have a product for which there are multiple verbs someone could use to make queries about your business, swapping out copy in your headlines could be an A/B test you want to try for yourself.

Using the same type of hypothesis format we shared above, and the help of the A/B testing calculator (for determining your duration and sample size), you can set up some variants of your landing pages to pair with your ads to see whether you can convert more.

ConversionLab’s test isn’t a catch all or best practice to be applied blindly to your campaigns across the board, but it could inspire you to try out Dynamic Text Replacement on your landing pages to see if carrying through search terms and intent could make a difference for you.


Prototyping An App’s Design From Photoshop To Adobe XD

(This is a sponsored article.) Designing mobile apps means going through different stages: pre-planning, visual concepts, designing, prototyping, development. After defining your project, you need to test how it will work before you begin to develop it.

This stage is captured by prototyping. Prototyping allows designers to get a feel for the functionality and flow of an app, and to preview screens and interactions. Testing with prototypes provides valuable insights into user behavior and can be used to validate the interaction model. It makes it possible to represent the interactivity of an app before its development, giving developers a global vision of how the app works, how users behave, and which steps to take.

Prototyping is the simulation of the final result of an app’s development. Through this step, it’s possible to show a workflow of an app and consider problems and solutions. The two fundamental roles who will work in this phase are the user interface (UI) designer, who creates the look and feel, and the user experience (UX) designer, who creates the interaction structure between elements.

There are many ways to design and create an app’s look. As a loving user of Adobe products, I work most of the time in Illustrator and Photoshop. Illustrator helps me when I create and draw UI elements, which I can simply save and use later with Adobe XD. The process is the same as I’ve done for icons and that I showed you in my previous article “Designing for User Interfaces: Icons as Visual Elements for Screen Design.”

Photoshop comes in handy when I have to work with images in UI. But that’s not all: With the latest Adobe XD release, we can bring Photoshop design files into XD very quickly, and continue prototyping our app.

Today, I’ll offer a tutorial in which we discover how to transfer our app’s design from Photoshop to XD, continuing to work on it and having fun while prototyping. Please note that I’ve used images from Pexels.com in order to provide examples for this article.

We will cover the following steps:

  1. Simple hand sketch,
  2. Designing In Photoshop,
  3. Importing PSD files to XD,
  4. Developing a prototype,
  5. Tips.

For Adobe tools, I will use Photoshop CC, Illustrator CC and XD CC — all in the 2018 versions.

Let’s get started!

1. Simple Hand Sketch

Before we start designing our app, we need a plan for how to go about it. There are some questions we have to answer:

  • What is the app for?
  • What problem does it solve?
  • How easy is it to use?

Let’s assume we want to create an app for recipes. We want something simple: a space for pictures with ingredients and recipes.

I sketched by hand what I have in mind:



Then, I grabbed Photoshop and created my layouts.

2. Designing In Photoshop

Before we create layouts for our app, we can take advantage of a very useful resource by Adobe: free UI design resources. Because we will be designing an iOS app, I downloaded the iOS interface for Photoshop.

Feel free to experiment with the layouts you’ve downloaded.

In Photoshop, I created a new blank document from a preset for the iPhone 6 Plus:



Below is our layout, as I designed it in Photoshop. I tried to reproduce what I drew by hand earlier.









The PSD file contains four artboards. Each has its own layers.

Note: The images used in this prototype are from Pexels.com.

Let’s see how to import this PSD file into Adobe XD.

3. Importing PSD Files To Adobe XD

Let’s run Adobe XD and click on “Open.” Select our PSD file, and click “Open.”



Ta-dah! In a few seconds, you’ll see all of your PSD elements open in XD.



More importantly, all of the elements you just imported will be organized exactly as they were in Photoshop. You can see your artboards on the left:



When you select an artboard, you will see its layers on the left — exactly the way it was in Photoshop before exporting.



Let’s do something in XD to improve our layout.

Go to Artboard 3. In this artboard, I want to add some more images. I just created three spaces in Photoshop to get an idea of what I want. Now, I can add more with some very simple steps.

First, delete the second and third image. From the first rectangle, delete the image, too, by double-clicking on it. You’ll have just one rectangle.



With this rectangle selected, go to “Repeat Grid” on the right and click it. Then, grab the handle and pull it downward. The grid will replicate your rectangle, inserting as many as you want. Create six rectangles, and adjust your artboard’s height by double-clicking on it and pulling its handle downwards.





Now, select all of the images you want to put in rectangles, and drag them all together onto the grid you just created:

Et voilà! All of your pictures are in place.

Now that all of the layouts are ready, let’s play with prototyping!

4. Developing A Prototype

Let’s begin the fun part!

We have to create interactions between our artboards and elements. Click on “Prototype” in the top left, as shown in the image below.



Click on the “Start here” button. When the little blue arrow appears, click and drag it to the second artboard. We are connecting these two artboards and creating interaction by clicking the intro button. Then, you can decide which kind of interaction to use (slide, fading, time, etc.).

See how I’ve set it in the image below:



Scrolling tip: Before viewing our prototype preview, we need to do another important thing. We have to make our artboards scrollable, giving them the same effect as when pushing a screen up and down with a finger on the phone.

Let’s go back a step and click on “Design” in the top left. Check which artboards are higher — in this case, the third and fourth. Select the third artboard from the left, and you’ll see the section “Scrolling” on the right. Set it to “Vertical.”

Then, you’ll see that your “Viewport Height” is set to a number higher than the original artboard’s size. That’s normal, because we made it higher by adding some elements. But to make our artboard scrollable, we need to set that value to the original artboard’s size — in this case, 2208 pixels, the height of the iPhone 6 Plus, which we set in Photoshop and then imported to XD.

After setting that, you’ll see a dotted line where your artboard ends. That means it is now scrollable.



To see our first interactions in action, click on “Prototype” in the top left, and then click the little arrow in the top right. See them in action below:

Let’s finish up by connecting all of our artboards, as we’ve seen before, and check our final prototype. Don’t forget to connect them “back” to the previous artboard when clicking on the little arrow to go back:



And here is the final demo:

In this tutorial, you have learned:

  • that you can design your app in Photoshop,
  • how you can bring it into Adobe XD,
  • how to create a simple prototype.

Tips

  • Decide on one primary action per screen, and highlight its containing element through visual design (e.g. a big CTA).
  • White space is very important on small screens. It prevents confusion and gives the user more clickable space. And here comes that rule again: One primary action works well with more white space.
  • If you are not on a desktop, avoid all unnecessary elements.
  • Always test your prototypes with regular users. They will help you to understand whether the flow is easy to follow.

This article is part of a UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype, and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Smashing Editorial
(ra, al, il)


Tips For Conducting Usability Studies With Participants With Disabilities

Over the last few years, I ran several usability studies with participants with various disabilities. I thought it would help others if I shared some of my experiences.

In this article, I provide lessons learned and tips to consider in planning and executing usability testing with participants with disabilities. The lessons are divided into general ones that apply to all types of disabilities, and lessons for three specific disability categories: visual, motor, and cognitive. These tips will help you regardless of where you work: whether you are part of an established user research team where usability testing is already part of your design process, or working on your own with limited resources but wanting to improve the quality of your user research by expanding the diversity of participants.


Windows 10 high contrast mode of Google.com. (Large preview)

Background

Several of our clients, from a state government agency to several Fortune 500 companies, came to us at the User Experience Center (UXC) for help with their websites. They wanted to make sure users with disabilities could access their sites and accomplish their goals.

There are many different kinds of disabilities; however, there is general agreement on categorizing people with disabilities into four general categories: visual, auditory, motor (also referred to as “physical”), and cognitive. There are different conditions and much variability within each category, e.g., within visual disabilities: color blindness, low vision, and blindness. There is also a distinction as to when a disability is contracted, e.g., a person who was born blind as opposed to one who lost their vision later in life.

Furthermore, as we age or encounter unique situations (such as multi-tasking), we may have a similar experience to people we think of as disabled. Therefore, disabilities should be thought of as a spectrum of abilities that should be accounted for during the design of all user interfaces and experiences.

Typically, in order to ensure that disabled people can use their digital products and services, companies aim for compliance with accessibility guidelines such as the Web Content Accessibility Guidelines (WCAG 2.0). While this is critical, it is also important to have users with disabilities try to accomplish real tasks on the site in usability testing. There may be gaps in the overall user experience…

Think about the typical doors found in buildings. How many times have you tried to open a door one way and realized it actually opens the other way, for example, push instead of pull? Technically the door is accessible, but is it usable?


Example of doors that are technically accessible, but not usable (handles give impression you should push but must pull to open). (Large preview)

Even if a site follows the accessibility guidelines and is technically accessible, users may not be able to accomplish their goals on the site.

Lessons Learned

In most ways, usability testing with this segment of the population is no different than testing with anyone else. However, there are several areas you need to pay just a bit more attention to so your sessions run smoothly. The lessons or tips are broken down into general ones that can apply to all participants and specific tips for various disability types such as visual, motor, and cognitive.

General Lessons Learned

1. Ensure a baseline level of accessibility before usability testing

Planning usability testing, and especially recruiting participants, takes time for both the project team and the recruited participants. Make sure basic accessibility issues are caught in an accessibility review before the sessions, so that participants aren’t spending their time discovering problems you could have found yourself.

Two good examples of basic accessibility issues that should be addressed prior to usability testing are:

  • Missing alternative (alt) text.
    Usability testing can be used to see if the alt text used is appropriate and makes sense to participants, but if all the participants are doing is confirming that the alt text is missing then this is not a good use of their time.
  • Appropriate color contrast.
    All page designs should be reviewed beforehand to make sure all foreground and background colors meet WCAG 2.0 AA color contrast ratios.
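The contrast check itself is mechanical and easy to automate. Here is a minimal sketch of the WCAG 2.0 relative-luminance formula; the example colors are arbitrary:

```python
def relative_luminance(hex_color):
    """WCAG 2.0 relative luminance of an sRGB hex color like '#777777'."""
    def channel(c):
        c /= 255
        # Linearize the sRGB channel per the WCAG 2.0 definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    h = hex_color.lstrip("#")
    r, g, b = (channel(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; order doesn't matter."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 2.0 AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0 (maximum possible)
print(round(contrast_ratio("#777777", "#ffffff"), 2))  # 4.48 -> just fails AA
```

A mid-gray like `#777777` on white is a good reminder of why you check rather than eyeball: it looks readable but falls just short of the AA threshold.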
2. Focus the recruiting strategy

If you work with an external recruiter, ask them if they have experience recruiting people with disabilities; some do. If you are recruiting internally (without an external recruiter), you may need to reach out to organizations that have access to people with disabilities. For example, if you need to recruit participants with visual disabilities in the United States, you could contact a local chapter of the National Federation of the Blind (https://nfb.org/state-and-local-organizations) or a local training center such as the Carroll Center for the Blind in Massachusetts (http://carroll.org/). If you use social media to advertise your study, a good approach is to use the hashtag #a11y (it stands for accessibility — there are 11 letters between the “a” and “y”) in your post.

3. Bring their own equipment/assistive technology

Allow and encourage participants to bring their own equipment such as their own laptop, especially if they use assistive technology. This way, you can truly see how people customize and use assistive technology.

4. Have a backup plan for assistive technology

As stated above in #3, it is best if participants can bring their own equipment. However, it is always wise to plan for the worst; for example, a participant does not bring their equipment, or there is a technical problem such as their equipment not connecting to your Wi-Fi network. In the case of visually impaired participants, install the assistive technology (AT) they will be bringing in, such as screen reader software, on a backup PC. For many of the AT software packages, you can get a free trial that should cover you for the usability testing period. This has saved us several times. Even though the configuration was different from what the participants had, we were able to run the session. Participants were quickly able to go into the settings, make some adjustments (e.g., increase the speech rate), and get started with the session.

5. Allow additional time

Provide additional time in-between sessions. Typically we like to reserve 30 minutes between participants. However, when participants plan to bring in their own equipment additional time may be required for setting up and resolving any issues that may arise. When testing with individuals with disabilities, we schedule an hour between sessions, so we have extra time for setting up assistive technology and testing it.

6. Confirm participant needs

Either with the recruiting screener or via email or telephone, confirm what equipment participants will bring in and need to be supplied beforehand. In our lab, we can connect external laptops (that in this case, were outfitted with special accessibility software and settings) to our 1Beyond system via an HDMI cable. In a recent study, all of our participants’ laptops had HDMI ports. However, we forgot to check this beforehand. This is an example of a small but important thing to check to prevent show-stopping issues at the time of the test.

7. Consider additional cost

Depending on the disability type, transportation to the usability testing location may add additional burden or cost. Consider the cost of transportation in the incentive amount. If feasible, provide an extra $25–$40 in your incentive so participants can take a taxi/Uber/Lyft, etc. to and from your location. Depending on access to public transportation and taxi/ride-sharing rates in your area, the amount may vary. Our participants came to the UXC in different ways, some more reliable and timely than others.

8. Revise directions

Check the directions you provide for accessibility. Make sure they include an accessible path into your building. Test them out beforehand. Do you need to provide additional signage? If so, ensure all signs are clear, concise, and use plain-language directions.

9. Review the emergency evacuation plan

Review the plan in the event of a fire or other emergency. Map out the emergency evacuation plan in advance.

10. Consider logistics

Consider remote usability testing as an option. One of the benefits of bringing individuals with disabilities into the lab for usability testing is observing first-hand participants’ use of the product or website in question. However, the logistics of getting to your location may be just too much for participants. If it’s possible to test remotely (we typically do this through Zoom or GoToMeeting), it should be considered. This poses the additional challenge of making sure your process for capturing the remote session is compatible with all of the participant’s assistive technology, as well as accessible itself. Troubleshooting remotely is never fun and could be more difficult with this segment of the population.

11. Hearing impaired participants

Some participants may have a hearing impairment where the position of the moderator and participant is critical for adequate communication. In the case of hearing-impaired participants, it is important to get their attention before talking to them and also to take turns when engaging in conversation.

To get the most out of this research, it is best if participants are not discovering basic accessibility issues that should have been caught earlier during an accessibility review and/or automated testing.

Lessons Learned For Participants With Visual Disabilities

Participants with visual disabilities range from people who are blind and use screen readers such as JAWS, to people who need the text or the screen enlarged using software such as ZoomText or the native zoom in the browser. People who are color-blind also fall into this category.

  • For any documents needed prior to the study, such as the consent form, send them via email beforehand and ask participants to review and send them back in lieu of a physical signature. If you don’t, be prepared to read the consent form aloud and to assist some participants in signing the documents.
  • Make sure directions are provided step by step; do not rely only on graphical maps, as these may not be accessible.
  • For all documents, make sure color is not used as the sole cue to convey information. Print out all documents on a black and white printer to make sure color is not required to make sense of the information.
  • Get participants’ mobile phone numbers in advance and meet them at their drop-off point. Be prepared to guide them to the testing location, and review best practices for guiding blind individuals beforehand.
  • While Braille documents can be helpful for participants who read Braille, the time and cost involved may not be feasible. Furthermore, not all blind people read Braille, especially people who have lost sight later in life. It is best to make sure all documents can be read via a screen reader. Unless you are sure there are no accessibility issues, avoid PDF documents and send out simple Word documents or text-based emails.
  • If participants bring guide dogs, do not treat them as pets; they are working. Provide space for the dog, and do not pet it unless the participant gives you permission.
  • Make sure to explain beforehand any sounds or noises that are or may be present in the room, such as unique audio from recording software. This may keep the participant from becoming startled or confused during the session.
  • Initially when I started to work with blind participants I was worried my choice in language might offend. However, over the years I have learned that most blind participants are fairly relaxed when it comes to speech. Therefore, during moderation do not be afraid to use phrases such as “see” or “look” and similar words when talking to blind participants; for example, “please take a look at the bottom of the page” or “what do you see in the navigation menu?” In my experience, blind participants will not be offended and will understand the figurative meaning rather than the literal meaning.
  • Test out all recording equipment/processes beforehand. Ensure all audio including both human speech in the room and audio/speech from AT such as screen readers will be recorded correctly. During testing of the equipment adjust the locations of the microphones for optimal recording.

Lessons Learned For Participants With Motor Disabilities

Motor disabilities refer to disabilities that affect the use of arms or legs and mobility. These individuals may need to use a wheelchair. Some people may not have full use of their hands or arms and cannot use a standard mouse and keyboard. These people may need voice recognition software, which allows them to use voice input, or a special pointing device, for example, one that is controlled by their mouth.

  • In the directions, make sure the route is accessible and routes participants via elevators rather than stairs. Also, if participants are driving, note the location of accessible parking.
  • Note whether doors have accessible door controls. If not, you may need to meet the participant and guide them to the testing location.
  • Make a note of the nearest accessible restrooms to the testing location.
  • As with all participants with disabilities, it is best if they can bring in their own laptop with their assistive technology software installed and any other required assistive technology. However, in the case of participants (such as Adriana in Figure 3) that use voice recognition software such as Dragon Naturally Speaking this is critical because they have trained the software to recognize their voice.
  • Make sure the desk or table where the participant will be working can accommodate a wheelchair and that the height is adjustable. According to the Americans with Disabilities Act (ADA), tables must provide knee clearance at least 27 inches high to accommodate individuals in wheelchairs.

Adriana Mallozzi conducting a usability test at the User Experience Center. Adriana has a motor disability (cerebral palsy) which affects her entire body. She uses a wheelchair, a sip-and-puff joystick that acts as a mouse, along with Dragon Naturally Speaking. (Large preview)

Lessons Learned For Participants With Cognitive Disabilities

Individuals with these disabilities cover a wide range, from relatively mild learning disabilities such as dyslexia to more profound cognitive disabilities such as Down syndrome. In general, people with cognitive disabilities have challenges with one or more mental tasks. Rather than looking at specific clinical definitions, it is best to consider functional limitations in key areas such as memory, problem-solving, attention, reading, or verbal comprehension, and how best to accommodate participants during usability testing. Many of the tips below also apply to all participants; however, for this group you need to be extra aware.

  • Sometimes participants will be accompanied by a caretaker or an aide. This person may assist with transportation or may need to be present with the participant during the usability test. If the caretaker will be present during the test, make sure they understand the structure of the usability test and what will be required of the participant. If you know before the study that the participant will be accompanied, review the goals and protocol with them prior to arrival via email or phone. That is, as much as possible, the participant should be the one conducting the usability test, and the caretaker should not be involved unless completely necessary.
  • In some cases, the caretaker or aide may act like an interpreter. You may need to communicate with this interpreter in order to communicate with the participant. If this is the case, make sure you record the audio coming from both the participant and the interpreter.
  • Provide instructions in multiple modalities, for example, both written and verbal. Be patient and be prepared to repeat the task or ask the same question multiple times.
  • Be prepared to break tasks into smaller sub-tasks to support memory/attention challenges or fatigue that may set in.
  • Ideally, it is best to keep tasks consistent for all participants; however, for some participants with cognitive disabilities, you should be prepared to go off-script or modify tasks on the fly if the current approach is not working.
  • Make the participant’s comfort and well-being the number one priority at all times. Don’t be afraid to take multiple breaks, or to end the session early if things are just not working out or the participant is not comfortable.



Amazon.com home page zoomed in at 250%. (Large preview)

Additional Insights

The tips above should serve as guidelines. Each participant is unique and may require various accommodations depending on their situation. Furthermore, while some of the tips are categorized for specific disability types, specific individuals may have multiple disabilities and/or benefit from a tip from a different category than their primary disability.

If you or your company have conducted user or customer research, you know the value of gathering feedback about the issues and benefits of products and systems. Testing with individuals with disabilities is no different, as you learn many insights that you would not gain otherwise. However, an additional takeaway for us was the realization that people use assistive technologies in different ways. The following example is specific to people with visual disabilities, but there are similar examples across all groups.

A common assumption is that someone who is blind only uses a screen reader such as JAWS, and is an expert at it. We found that people with visual impairments actually differ greatly in the level of support they need from assistive technology.

  • Some users need a screen reader for accessing all content.
  • Some users (with more sight/with low vision) only need to enlarge content or invert page colors to increase contrast.
  • Others may need a combination of approaches. One visually impaired participant used both a screen reader along with the zoom function embedded in the web browser. She only used a screen reader for large paragraphs of text, but otherwise simply zoomed in with the web browser and got very close to the screen when navigating around the website.

Furthermore, just like anyone else, not all users are experts on the software they use. While some would consider themselves experts, others learn only enough to accomplish what they need and no more.

Moving Forward

Hopefully you have learned some useful information that will help you bring more diversity into your usability testing. Since there is so much variability across disabilities, this may seem overwhelming. I recommend starting small: for example, include one or two participants with disabilities as part of a larger group of 5 to 10 participants. In addition, initially bring in someone who has both experience with usability testing and extensive experience with their assistive technology, so you can focus on gathering their feedback rather than on how the usability testing process works or how they use their assistive technology.

Acknowledgements

I would like to thank Jocelyn Bellas, UX Researcher at Bank of America, and Rachel Graham, UX Researcher at Amazon. When Rachel and Jocelyn worked at the User Experience Center as Research Associates in 2016, they worked with me on some of the projects referenced in this article and also contributed to a related blog post on this topic.



View this article: 

Tips For Conducting Usability Studies With Participants With Disabilities


Stop Making These Common Mistakes with Your Website Popups (Includes Examples and Quick Fixes)

Depending on who you talk to, website popups are either a godsend for list building and subsequent revenue creation, or they’re a nuclear bomb for the user experience.

Some can’t stand popups and completely disregard sites that use them (or that’s what they say, at least). And there are even entire websites dedicated to hating on especially bad popups.

However, many marketers are fully charmed by their capabilities for revenue generation, lead collection, and driving attention and conversions in general.

It doesn’t have to be an either/or situation, though.

You can create website popups that aren’t detrimental to the user experience. In fact, if you do it really well, you can even improve the user experience with the right offer and presentation.

We all want to be companies that care about our visitors and make the best popups possible, so it goes without saying that we care about timing, targeting, and triggering (i.e. when we show offers, who we show them to, and what those offers are). After all, the main reasons visitors get annoyed by popups are 1) when they disrupt the user experience and 2) when they offer no value or help.

Fortunately, you can easily solve for these things. In this article I’ll outline common website popup mistakes with real examples, and I’ll cover a few ways to remedy these mistakes.

Mistake 1: Poor timing

One of the biggest mistakes marketers make with website popups is timing. It’s almost always the case that we trigger popups too soon (i.e. right away, no matter the context of the page or visitor).

On an Inbound.org discussion, Dustin J. Verburg had this to say:

“The most hilarious popups are the ones that say ‘LOVE THIS CONTENT? SUBSCRIBE FOR MORE’ because they assault my eyes before I even read two words of the article.

Now I guess I’ll never know if I love the content, because I close the tab immediately and never come back.”

Similar to Dustin, imagine you’re taking a break from work to check out GrowthHackers. You find an article on the front page that looks interesting. You open it and immediately get this:

Woah, what’s this full screen takeover? I know this is common today, but most people are jarred by this experience.

Now you may not even remember what the article was, so you’re likely to click away and go back to actual work.

One possible way to remedy this – just spitballing here – could be to add some copy explaining that the visitor needs to click to continue on to the article. Forbes does this (though Forbes could never claim a good user experience without a good laugh):

At least you know where you’re at (the logo is prominent) and what to do (continue to site). But, it goes without saying, Forbes’ experience is not ideal so don’t copy it.

So how do you fix poor timing?

The best possible solution for user experience is to trigger a popup at a time that actually benefits a visitor. On a long-form blog article, this is usually at some point of strong user engagement, either measured by time on site or, better, by scroll-depth and content engagement.

You can do this with an on-scroll popup created in Unbounce.

Once you’re happy with your design, simply set your trigger for when someone scrolls through a certain percentage of the page, or even after a delay you specify:

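Unbounce handles this trigger configuration for you, but if you were wiring up a scroll-depth trigger by hand, the logic is just a threshold check. Here’s a minimal sketch in plain JavaScript — the `#popup` element and the 60% threshold are hypothetical, not anything Unbounce-specific:

```javascript
// Returns true once the visitor has scrolled past `threshold` (a 0–1
// fraction) of the page's scrollable height.
function hasScrolledPast(scrollTop, viewportHeight, pageHeight, threshold) {
  const scrollable = pageHeight - viewportHeight;
  if (scrollable <= 0) return true; // page fits in the viewport
  return scrollTop / scrollable >= threshold;
}

// Browser wiring (hypothetical #popup element, 60% threshold):
if (typeof window !== "undefined") {
  let shown = false;
  window.addEventListener("scroll", function () {
    if (shown) return;
    if (hasScrolledPast(window.scrollY, window.innerHeight,
                        document.documentElement.scrollHeight, 0.6)) {
      document.getElementById("popup").style.display = "block";
      shown = true; // only fire once per page view
    }
  });
}
```

The `shown` flag matters: a popup that re-fires on every scroll event is exactly the kind of overload covered in Mistake 6.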

Overall, poor timing is a common problem, and it’s almost never intentional. We simply act hastily when setting up popups, or we spend all of our time crafting the offer and forget that when the offer is shown matters too.

I want to point out, however, that it’s not always a bad decision to throw a popup at visitors on arrival. It’s all about context.

For example, if you’re shopping for clothes, there are a million options available. Therefore, it’s imperative for ecommerce shops to grab your attention as quickly as possible with an attractive offer. This is why you see so many website popups with discounts on arrival on ecommerce sites, like this one from Candle Delirium:

As well as this one from BustedTees:

It’s a very common tactic. We’ll go over it specifically in regard to ecommerce later in section three.

In general, it’s important to analyze a visitor’s behavior and trigger the popup at the exact moment (or as close to it as possible) that someone would want to subscribe/download your offer/etc. It’s a lot of work to tease out when this may be, but the analysis is worth it as you’ll annoy fewer visitors and convert more subscribers or leads.

Fix annoying timing: Consider the user experience. Does it warrant an on-arrival popup? If not, what’s the absolute ideal timing for a popup, based on user intent, behavior, and offer?

Mistake 2: Poor targeting

Poor targeting is a broad problem that’s usually made up of a mismatch between who you’re targeting and what offer you’re sending (though, you could also add in when you’re targeting them as a variable as well).

For instance, if you’re targeting a first time organic visitor to a blog post with a popup that announces a new product feature, you may spur some confusion. Rather, you should try to target based on appropriate user attributes, as well as within the context of where they are in the user journey. A better offer for a first time blog visitor might be an ebook or email course on a topic related to the blog post.

An example of poor targeting is on LawnStarter’s post about where new residents of Birmingham are moving from. The popup offers a cool infographic-based guide, but in this case it’s irrelevant to the content of the post someone’s currently reading:

In another, better example, Mailshake has a massive guide on cold emailing, which would be a daunting read in a single session. It’s probably appropriate, then, that they offer the book up for download via a sticky bar at the bottom of a related article:

There are ways they could improve copy, design, or the offer itself, but the core point is that their targeting is spot on (i.e. after someone’s reading something about cold emailing, and offered up as added, downloadable value).

Now, if I already visited this page and downloaded the playbook, and they still hit me with this offer, then we’d have a targeting problem. They could use the fact that I’m a repeat visitor, as well as a subscriber already, to target me with a warmer offer, such as a deeper email course, a webinar, or possibly even a consultation/demo depending on their sales cycle and buyer’s journey.

The fix for poor targeting

Remember with targeting, you’re simply trying to align your offer with your visitor and where they are in their awareness and interest of your company and product.

This is where the value of progressive profiling comes in. But if you’re not doing that, at the very least you should be aligning the offers on your page with the intent of the traffic on that page.

You can also target offers based on URLs, location, referral source, and cookies. Really think about who is receiving your offer and at what point in the customer journey before you set a popup live.

With popups created in Unbounce, for example, you can use referral source as a way to target appropriate offers to someone who’s come from social traffic, vs. someone who’s arrived via AdWords traffic:

Simply create your popup, and in advanced targeting, select which referral sources you’d like to have access to the offer:
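If you were rolling this referral-source logic yourself rather than using a tool, it comes down to inspecting the referrer and the landing URL. A sketch — the host list, the `gclid` check, and the offer names are all hypothetical:

```javascript
// Decide which popup offer to show based on where the visitor came from.
function offerFor(referrer, pageUrl) {
  // Paid search clicks normally carry a gclid query parameter.
  if (/[?&]gclid=/.test(pageUrl)) return "ads-offer";
  let host = "";
  try {
    host = new URL(referrer).hostname.replace(/^www\./, "");
  } catch (e) {
    return "default-offer"; // direct traffic or no referrer
  }
  const social = ["facebook.com", "twitter.com", "linkedin.com"];
  if (social.some((s) => host === s || host.endsWith("." + s))) {
    return "social-offer";
  }
  return "default-offer";
}

// Browser usage:
if (typeof document !== "undefined") {
  const variant = offerFor(document.referrer, location.href);
  // ...render the popup variant matching `variant`
}
```

Checking for `gclid` rather than a `google.com` referrer is deliberate: a plain Google referrer can’t distinguish paid clicks from organic ones.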

Fix targeting the wrong people at the wrong time with the wrong offer: Analyze your customer journey and intent levels on content. Craft offers according to customer journey status as well as on-site user behavior.

Mistake 3: Offers with no obvious value

How many times have you been on a blog that simply wants you to sign up for a mailing list, no value promised or given? Like this:

If you’re an active reader of the blog, maybe this works. After all, you already know the value of the content and simply want to sign up for updates. Makes sense. But I’d wager this type of active reader is a small percentage of traffic, and these people will sign up however they can. So the popup isn’t useful for everyone else.

As we covered before, a much better way to capture attention is with a discount, like the one Allen Edmonds offers here as soon as I land on the site. (On another note, this is a great use of immediate triggering: a popup isn’t annoying when it delivers me a discount.)

This is a super common ecommerce tactic.

It’s a competitive world out there, and giving an immediate hit in the form of a discount is a good way to capture some of that oh so valuable attention. It’s especially common when used on first time visitors to the homepage, as a homepage visitor’s experience is generally more variable and less intent-based (if they land on a product page from a search ad, it’s a bit of a different story).

Here’s an example from Levi’s:

The fact that most ecommerce sites have similar messages nowadays is indicative of a creativity problem, one that presents itself to marketers in any industry. We look to competitors and to the consensus and think that we can’t fall behind, so we replicate tactics.

However, I’m more interested in sites, like Four Sigmatic, that push beyond and implement a creative offer, like their lottery style subscription featured below. (This is one of the only popups I’ve signed up for in months, by the way):

Offering up poor or no value is really the least forgivable mistake if you’re a marketer. Crafting offers that align to your buyer persona is your job. Also, it’s fun. If you have a bland offer, this could easily be the biggest opportunity for lifting conversions, as well as improving the user experience (no one is complaining about awesome offers).

Foot Cardigan does a really good job of offering value and conveying it in a fun way too:

Triggering popups with zero value? Think about ways you can give massive value to your site visitors, so much that they really want to give you their email, and create an offer for this.

Mistake 4: Poor design

If you use Unbounce Popups, it’s almost hard to create an ugly one. Still, the internet is filled with eyesore examples:

Design matters. A poorly designed website element can throw off your whole brand perception, which is important in creating trust, value, and in easing friction.

As Ott Niggulis put it in a ConversionXL article:

“Success in business online is all down to trust. You either see something that makes you trust a vendor or you don’t. Trust is also directly linked to conversions – if people leave your website because it’s so badly designed that it makes you seem untrustworthy then you’re missing out on lost prospects, customers, sales, and profits.

Good design = trust = more conversions = more money in your pocket. It’s as easy as that.”

That same article cites a study where 15 participants were directed to Google health information that was relevant to them, then they were asked about their first impressions of the sites.

Out of all the factors mentioned for distrusting a website, 94% were design related. Crazy!

So don’t just put up a poorly designed popup thinking the message will be the focus. Put some effort into it.

Of course, you don’t always need to look like a luxury brand. If cheap spartan is your schtick, then it can work for you. After all, Paul Graham’s site isn’t pretty but it’s so, so valuable:

Image of Paul Graham’s site.

As Aurora Bedford from NN/g explains it, it’s more about matching design to your brand values and objectives:

“The most important thing to remember is that the initial perception of the site must actually match the business — not every website needs to strive to create a perception of luxury and sophistication, as what is valuable to one user may be at complete odds with another.”

No matter what your brand positioning may be, however, make sure you clean up obvious design mistakes before hitting publish.

Fix up bad design: Spend a few hours longer designing your popup, hire a designer, or use a tool like Unbounce with a template.

Mistake 5: Poor Copy

Presenting your offers with clear copy is huge. Most copywriting, not just on popups but online in general, is:

  • Boring
  • Vague
  • Confusing
  • Cringe-inducing

…in that order, I’d wager. Not often do you find crisp, clear, and compelling copy (unless it was whipped up by a professional, of course).

As with the example below, you’re more likely to find copy that’s vague (how many ebooks, which ones, etc.) and cringe-inducing (Rocking with a capital R is pretty goofy):

The copy you write for your popup may be the most effective mechanism you have for converting visitors (outside of the targeting rules). Here’s how Talia Wolf, founder of GetUplift, put it in an Inbound.org comment:

“Many people are trying to capture your customer’s attention too so you need to give them a good reason for subscribing/not leaving.

It’s not enough to talk about yourself; you need to address the customer’s needs. One way is by highlighting the value your customer gains; the other, highlighting what they might lose. (Example: “Join thousands of happy customers” vs. “Don’t lose this unique content we’re giving our subscribers only”)”

Her website has a solid example of a popup with great copywriting, by the way:

Sometimes, all you need to do is pull your message to the top and make it prominent. Often we try to write clever copy instead of clear copy, but clear always beats clever.

For example, if the following popup led with the money offered for the account, it’d probably be more compelling than their current vague headline:

Mistake 6: Overload

Sometimes websites can get pretty aggressive. Here’s an experience I ran into on Brooks Brothers’ website:

One (pretty value-less) popup that I click out of, only to be followed by another one:

Now, there’s just a lot of clutter going on here. Different colors, different offers, different banners. As a first time visitor, I’m not sure what’s going on. Plus, they have animated snowfall, which adds to the clutter.

This is quite extreme, but it’s not uncommon for marketers to see some results with a popup and go overboard, triggering two, three, even four in a single session. When all of this occurs within 10 seconds of being on the site, things get annoying quickly.

Take down too many popups: Simplify and strategically target any popups on your site. They shouldn’t appear everywhere for everyone; your targeting is key.

The lesson

Popups don’t need to be annoying. Rather, they can actually add to the user experience if you put a little time and effort into analysis and creative targeting and triggering.

If you avoid the mistakes here, not only will your popups be less likely to feel intrusive, but they’ll convert better and they’ll convert the types of subscribers and leads you actually want.

Run a popup experiment of your own: See Unbounce templates you can get up and running with today.

Link: 

Stop Making These Common Mistakes with Your Website Popups (Includes Examples and Quick Fixes)

Real-World Examples of Show-Stopping Case Studies That Capture Attention and Close Sales

A compelling case study can be an extremely useful sales tool. Buyers love them. In fact, 78 percent of B2B buyers read case studies when researching an upcoming purchase. Why then, do so many companies fail to create compelling case studies? Well, creating a convincing case study is hard. There’s a reason why experienced copywriters charge thousands of dollars for a single case study. Case studies put your biggest benefits on display, using convincing language that connects with the core concerns of your ideal prospects. There are some common roadblocks you might run into when creating case studies. Asking something…

The post Real-World Examples of Show-Stopping Case Studies That Capture Attention and Close Sales appeared first on The Daily Egg.

Visit site:  

Real-World Examples of Show-Stopping Case Studies That Capture Attention and Close Sales

Get Better Landing Pages for AdWords with 3 Techniques to Try Today

If you’re a PPC strategist, your client’s campaigns live and die by the strength of the landing pages. If you drop the perfect paid audience on a page with no offer (or an unclear one), you’ll get a 0% conversion rate no matter how your ads perform.

The problem is that as AdWords account managers, we can be pretty limited in our ability to change landing pages. In this role, we typically lack the budget, resources, and expertise needed to affect what’s often the root cause of failing campaigns.

So how do you rescue your AdWords campaigns from bad landing pages without also becoming a landing page designer or a conversion rate optimization expert?

Below are three techniques you can use to reveal some insight, change performance yourself, or influence more relevant, better converting landing pages for AdWords.

1. Cut spend & uncover priority content with the ugly duckling search term method

Many AdWords accounts have rules that look something like this:

If the keyword spends more than $100 and doesn’t result in a sale, remove keyword.

Whether it’s automated or a manual check, the process is the same: “optimize” by getting rid of what doesn’t convert.

But this assumes that the landing page your ad points to is perfectly optimized and relevant to every keyword that might be important to your audience — a pretty tall order. But what if your target audience is searching for your offer with your seemingly “dud” keyword, and you’re driving them to an incorrect or incomplete landing page that doesn’t reflect the keyword or the search intent behind it?

The “Ugly Duckling” is a check you can do when your keyword isn’t hitting the performance metrics you want. It will help you figure out if your keyword is a swan, or a wet rat you need to purge from your aquatic friends.
Ugly duckling adwords landing page trick

As an example, let’s say your client is a fruit vendor, with an AdWords campaign driving coupon downloads. Here’s the ad group for concord grapes:

Concord Grape Ad Group

The Ad Group for Concord Grapes

The keyword phrase ‘organic concord grapes’ has a lot of search volume, but it’s performing horribly at $695 per coupon download!

An AdWords “rule” pausing or deleting what doesn’t work would wipe out this keyword in no time. But before assuming a wet rat, this is where you’d take a look at the (hypothetical) landing page:

the corresponding landing page

The hypothetical landing page for the fruit vendor’s Ad campaign.

The landing page never mentions your grapes are organic! No wonder your visitors aren’t converting. This is poor message match from your ad.

In this case, simply adding the high-volume, highly relevant term “organic” to your landing page is much smarter than negative matching the term your audience is using to find your product. There could be several keywords you’re bidding on that could use this swan/wet rat treatment.

Applying swan or wet rat to your AdWords landing pages

Instant wet rat: If your poorly performing keyword doesn’t reflect your offer at all (i.e. your grapes aren’t organic), then the keyword is a wet rat. Don’t bid on it, and consider negative matching to avoid further traffic.

Further investigation needed: Assuming your grapes are organic (or more broadly, the keyword is indeed relevant to your offer), there are several things you can try, such as:

  • Altering your ad headline: If it’s not already in there, test adding your keyword to your ad’s headline. This should drive a better quality score and cost per click, and you can see whether it affects CTR for the keyword. Because making changes to your landing page could require more rigorous review than changing ad copy, this can be a good first step.
  • Ad group break-out: If your keyword phrase is particularly long or is unrelated to the other keywords in your ad group, break it into a new ad group before including it in your headline.
  • Data-based landing page recommendation: If your keyword performance improves with the ad-specific steps above, you should now have the data you need to get your client or designer/team to feature the keyword prominently on the landing page. In the case of our example, “organic” can be easily added to the headline on the landing page.
    • In other cases, building out a separate, more specific landing page to address individual keywords could be more appropriate.
    • Depending on relevancy and search volume, you can incorporate the theme of the keyword throughout the landing page and offer.
  • Search term deep dive: Go a step further and examine the search terms, not just the keywords, following the same process. Looking at the actual search terms that do drive spend and traffic can reveal potential exclusions, match type tightening, and keywords to add.
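The swan-or-wet-rat triage above can be sketched as a simple decision function. Everything here — the field names, the decision order, the outcome labels — is a hypothetical illustration, not a real Google Ads API:

```javascript
// Triage a keyword that is missing its performance target: keep it (swan),
// cut it (wet rat), or route it to one of the fixes above.
function triageKeyword(kw) {
  if (kw.costPerConversion <= kw.targetCpa) return "keep";        // swan
  if (!kw.relevantToOffer) return "pause-and-negative-match";     // wet rat
  if (!kw.inAdHeadline) return "test-keyword-in-headline";        // cheap first step
  if (!kw.onLandingPage) return "recommend-landing-page-change";
  return "deep-dive-search-terms";
}
```

Run through the example above: ‘organic concord grapes’ at $695 per download misses its target, but it is relevant to the offer, so instead of a pause it comes back as an ad-headline test — exactly the order of checks the list describes.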

Hypothetically, here’s what performance could look like for our keyword once we’ve optimized the ad and resulting landing page to better reflect the product:

hypothetical before and after

This keyword we were about to pause is now driving 1,400+ coupon downloads, at a cost per download below our target. Swan after all!

2. Learn about your audience with “mini-quiz” ad copy

A strong AdWords landing page isn’t just about following best practices or using slick templates. It should encompass user research, sales data, persuasive messaging, and a compelling offer. Even when you can’t control all of that, you’ve got a trick up your sleeve: your ad copy.

Think of your ad copy as a quiz where you get to ask your audience what unique selling point is most important to them. With each ad click, you’re collecting votes for the best messaging, which can fuel key messages on your landing page.

To do this right, you have to have distinct messages and value propositions in your copy. For example, it makes no sense to run a test of these ad descriptions:

  • (Version A) Say goodbye to breakouts. #1 solution for clear skin. Try for free today!
  • (Version B) Say Goodbye to Breakouts. #1 Solution for Clear Skin. Try for Free Today!
  • (Version C) #1 solution for clear skin. Say goodbye to breakouts. Try for free today!

One of these ads will get a better click-through rate than the others, but you’ll have learned nothing.

A good ad copy quiz has distinct choices and results.
You’ll want to challenge assumptions about your audience. Consider this other, better version of the quiz from the text ad example above:

  • (Version A) Say goodbye to breakouts. #1 solution for clear skin. Try for free today!
  • (Version B) Get clear skin in just 3 days. Get your 1st shipment free. Order now!

Whether the winner is “#1 solution” or “Results in 3 days,” we’ve learned something about the priorities of our audience, and the learnings can be applied to improve the landing page’s headline and copy throughout. Rinse & repeat.
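Before declaring a winner, it’s worth checking that the CTR gap isn’t just noise. A minimal two-proportion z-test sketch — the impression and click counts below are made up for illustration:

```javascript
// Two-proportion z-statistic comparing the CTRs of two ad variants.
// |z| above ~1.96 suggests the gap is real at the 95% confidence level.
function ctrZScore(clicksA, imprA, clicksB, imprB) {
  const pA = clicksA / imprA;              // CTR of version A
  const pB = clicksB / imprB;              // CTR of version B
  const pooled = (clicksA + clicksB) / (imprA + imprB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / imprA + 1 / imprB));
  return (pA - pB) / se;
}

// Hypothetical counts: version A 120 clicks / 2,000 impressions (6% CTR),
// version B 80 clicks / 2,000 impressions (4% CTR).
const z = ctrZScore(120, 2000, 80, 2000); // ≈ 2.9, a significant gap
```

With a z-score near 2.9, version A’s message really did resonate more, so it’s a learning you can carry to the landing page; a z-score under ~2 means keep collecting clicks before rewriting anything.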

Turning your ads into mini-quizzes

See what your audience truly values by letting them vote with their click, and use distinct value propositions as the choices in your ad copy quiz.

Note: I normally don’t suggest including messaging in your ad that isn’t reflected on the landing page (i.e. if your landing page doesn’t mention price, neither should your ad). However, if you don’t control the landing page as the paid media manager, the CTR of an ad copy test can point you in the right direction for what to add to your page, so it’s fair game in this instance.

3. “Tip the scales” with exactly enough information

There’s a widely-spread idea that landing pages for AdWords should be stripped of any features, links, or functionality other than a form. This is just not true, and blindly following this advice could be killing your conversion rates.

Unbounce co-founder Oli Gardner frequently talks about the importance of landing page Attention Ratio:

Basically, your page should have one purpose, and you should avoid distractions.

This is great advice, especially for people who are tempted to drive AdWords traffic to a home page with no real CTA. But I find it has been misinterpreted and misapplied all over the internet by people who’ve twisted it into an incorrect “formula”, i.e.:

  • He who has the fewest links and options on the landing page wins.

That’s not how it works. People need links, content, choice, and context to make a decision. Not all links are bad; I’ve doubled conversion rates just by diverting PPC traffic from dedicated LPs to the website itself.

The question is, how much information does a visitor need in order to take action?

Ultimately you want to “tip the scales” of the decision-making process for your visitor – getting rid of unnecessary distractions, but keeping those essential ingredients that will help them go from “no” to “yes” or even “absolutely.”

Here are 2 very common mistakes that are killing conversion rates on landing pages across the internet:

Mistake #1: Single-option landing pages

You’ve heard all about the paradox of choice and analysis paralysis. You know that when people have too many options, they’re more likely to choose none at all. But what happens when you have too few?

If you don’t see what you want, you’re also going to say “no.”

As an example (one you probably won’t see in the wild, but it’s easy to illustrate): someone Googles a pizza delivery service. But the landing page allows them to order pepperoni and pepperoni only, and our vegetarian searcher leaves to order elsewhere.

At first glance, this might look like our “organic grapes” problem from earlier, but something different is at play.

Many AdWords ads today are driving to single-option landing pages, where the only choice is to take the offer exactly as-is. This can be fine when only one variation exists, or your visitors have a chance to narrow their choices later in the process.

But if your visitors’ search is more broad, don’t take away their options in an effort to simplify the page. You’ll miss out on potential sales, which is kind of the whole point of running a campaign.

Instead, driving to a category page, or one that gives your visitors (gasp) choice, will keep them engaged. You might also create several more specific landing pages, one per option, for visitors who have already narrowed down their choices on a broader page.

Mistake #2: The not-enough-info landing page

Another case of “When good landing page principles go bad” is the stripped-down, bare-bones dedicated landing page that has no useful information.

A disturbing and growing trend is for AdWords landing pages to feature no navigation, links, details, or information. There’s not even a way to visit the company domain from the landing page. This is a problem, because as the saying goes: A confused mind says no.

What’s going through your site visitors’ minds when they get to a landing page and can’t find what they need?

A landing page without enough information can be just as bad (or worse) than a landing page with too much.

Whether your traffic is warm or cold, coming from an email campaign or paid ads, arriving at your home page or a dedicated landing page, your visitors need to trust that you can solve their problems before they’ll convert on your offer.

Overall, just because someone’s clicked on an AdWords ad doesn’t mean they have fewer questions or less need for product details than if they came in from another channel. Remember to cover all the details of your offer in a logical information hierarchy, and don’t be afraid to give your visitors options to explore important info via lightboxes or links where appropriate.

Getting control over your landing pages for AdWords

As a PPC manager, you may not directly control the landing page, but you can remind your team to avoid conversion killers like:

  • Key questions from the top keywords that aren’t answered on the landing page
  • No clear reason to take action
  • Landing pages where choice is limited unnecessarily, leaving more questions than answers
  • Landing pages that don’t explain what will happen after a visitor takes action on the offer
  • No way for visitors to have their questions answered

Give your visitors a reason to say yes, remove their reasons to say no, and watch your conversion rates improve.

View this article:  

Get Better Landing Pages for AdWords with 3 Techniques to Try Today

How Crazy Egg’s Heatmap Report Discovers Hotspots of High Click Activity on Quicksprout.com

At The Daily Egg, we publish a lot of content on how you can improve your online business. Today we’re going to show you what our product actually does for a change. The video above shows you our Heatmap Report and how it can be used to improve a web page. In this case, we used the Heatmap Report on a popular Quicksprout.com blog post. Like the narrator in the video says, “The Heatmap Report is all about clicks.” The “hotter” the heatmap appears, the more clicks that region on your web page is getting. This indicates what regions visitors…

The post How Crazy Egg’s Heatmap Report Discovers Hotspots of High Click Activity on Quicksprout.com appeared first on The Daily Egg.

Credit: 

How Crazy Egg’s Heatmap Report Discovers Hotspots of High Click Activity on Quicksprout.com

[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

Reading Time: 2 minutes

What would you do with 21% more sales this month?

I bet you’d walk into your next meeting with your boss with an extra spring in your step, right?

Well, when you implement a strategic marketing optimization program, results like this are not only possible, they are probable.

In this new case study, you’ll discover how e-commerce software supplier, Ecwid, ran one experiment for four weeks, and saw a 21% increase in paid upgrades.

Get the full Ecwid case study now!

Download a PDF version of the Ecwid case study, featuring experiment details, supplementary takeaways and insights, and a testimonial from Ecwid’s Sr. Director, Digital Marketing.



By entering your email, you’ll receive bi-weekly WiderFunnel Blog updates and other resources to help you become an optimization champion.

A little bit about Ecwid

Ecwid provides easy-to-use online store setup, management, and payment solutions. The company was founded in 2009, with the goal of enabling business-owners to add online stores to their existing websites, quickly and without hassle.

The company has a freemium business model: Users can sign up for free, and unlock more features as they upgrade to paid packages.

Ecwid’s partnership with WiderFunnel

In November 2016, Ecwid partnered with WiderFunnel with two primary goals:

  1. To increase initial signups for their free plan through marketing optimization, and
  2. To increase the rate of paid upgrades through platform optimization

This case study focuses on a particular experiment cycle that ran on Ecwid’s step-by-step onboarding wizard.

The methodology

Last winter, the WiderFunnel Strategy team did an initial LIFT Analysis of the onboarding wizard and identified several potential barriers to conversion (both in terms of completing the steps to set up a new store, and in terms of upgrading to a paid plan).

The lead WiderFunnel Strategist for Ecwid, Dennis Pavlina, decided to create an A/B cluster test to 1) address the major barriers simultaneously, and 2) deliver a major lift for Ecwid quickly.

The overarching goal was to make the onboarding process smoother. The WiderFunnel and Ecwid optimization teams hoped that enhancing the initial user experience, and exposing users to the wide range of Ecwid’s features, would result in more users upgrading to paid plans.

Dennis Pavlina

Ecwid’s two objectives ended up coming together in this test. We thought that if more new users interacted with the wizard and were shown the whole ‘Ecwid world’ with all the integrations and potential it has, they would be more open to upgrading. People needed to be able to see its potential before they would want to pay for it.

Dennis Pavlina, Optimization Strategist, WiderFunnel

The Results

This experiment ran for four weeks, at which point the variation was determined to be the winner with 98% confidence. The variation resulted in a 21.3% increase in successful paid account upgrades for Ecwid.
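For context, a "winner with 98% confidence" call like this typically comes from a significance test such as a two-proportion z-test on the control and variation conversion rates. A minimal sketch, using hypothetical visitor and upgrade counts (the case study does not publish Ecwid's raw numbers):

```python
from math import sqrt, erf

def two_proportion_confidence(conv_a, n_a, conv_b, n_b):
    """One-sided confidence that variation B beats control A,
    using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical counts -- NOT Ecwid's real data
conf = two_proportion_confidence(300, 8000, 364, 8000)
lift = 364 / 300 - 1  # ~21.3% relative lift
print(f"relative lift: {lift:.1%}, confidence: {conf:.1%}")
```

With these made-up numbers the variation clears the 98% threshold; in practice the test would only be read out after the planned four-week run, not peeked at continuously.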

Read the full case study for:

  • The details on the initial barriers to conversion
  • How this test was structured
  • Which secondary metrics we tracked, and
  • The supplementary takeaways and customer insights that came from this test

The post [Case Study] Ecwid sees 21% lift in paid plan upgrades in one month appeared first on WiderFunnel Conversion Optimization.

See original article:

[Case Study] Ecwid sees 21% lift in paid plan upgrades in one month

How to Use Smarter Content to Build Laser-Focused Lists of Qualified Prospects

Laser Focus Content Marketing

Many companies invest a lot of time and money in content marketing. But very few are ever really successful with it. That’s because a lot of companies approach content marketing as some sort of hands-off sorcery. They write blog post after blog post and then sit around and wait for something to happen (hint: nothing will happen). Instead, you should think of content as a type of currency – a strategic asset that you can use within a framework to drive business results. This requires a plan and a strategy for how you will use content and then which…

The post How to Use Smarter Content to Build Laser-Focused Lists of Qualified Prospects appeared first on The Daily Egg.

Continue reading here:

How to Use Smarter Content to Build Laser-Focused Lists of Qualified Prospects

How pilot testing can dramatically improve your user research

Reading Time: 6 minutes

Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.

Unfortunately, user research isn’t always as easy as it sounds.

Do any of the following sound familiar:

  • During your research sessions, your participants don’t understand what they’ve been asked to do?
  • The phrasing of your questions gives away the answer or biases your results?
  • During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided?
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results?

If you’ve experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.

Pilot testing is a rehearsal of your research study: it lets you test your research approach on a small number of participants before the main study. Although this may seem like an extra step, it may in fact be the best-spent time on any research project.
Just like proper experiment design, investing time to critique, test, and iteratively improve your research design before the execution phase ensures that your user research runs smoothly and dramatically improves the outputs of your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with the process

At WiderFunnel, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (Methodology, Tools, Participant Target Profile)
  2. Pilot testing of research design
  3. Recruiting qualified research participants
  4. Execution of research
  5. Analyzing the outputs
  6. Reporting on research findings
User Research Process at WiderFunnel

Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.

Your research should always start with asking the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight. Or, in some cases, is the result of working with a large group of stakeholders across different departments, each trying to push their own unique agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret; thus limiting the number of actionable insights generated.

To overcome this, WiderFunnel uses the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of questions from the initial list and reworking those that remain.

Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.

This follows a rapid, iterative approach: we pilot the defined research design on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment

Let’s look at an example.

Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined both the research task and flow, the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order not to bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed list to improve the wording, ensure that questions complemented each other, and stay focused on achieving the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option when using the Sticky platform. To overcome this we got creative and used some free tools to test the research design.

We chose a Keynote presentation (with timed transitions) and its Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the pilot testing. Using these tools, we were able to conduct a quick and affordable pilot test.

The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately exposed flaws in the research design, including confusion about the test instructions and issues with the timing of each task.

In this case, our initial instructions did not give our participants enough context about what they were looking at, resulting in confusion about what they were actually supposed to do. Additionally, we had assumed that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content-rich, and 5 seconds did not give participants enough time to view everything on the page.

With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous “bugs”.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a set function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.

Nick So

The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.

Nick So, Director of Strategy, WiderFunnel

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought.

By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.

Do you agree that pilot testing is an essential part of all research projects?

Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!

The post How pilot testing can dramatically improve your user research appeared first on WiderFunnel Conversion Optimization.

Credit: 

How pilot testing can dramatically improve your user research