Tag Archives: study

How to Get Links For (Almost) Free: SEO Hacks For Tight Budgets


SEO can be expensive, but there are plenty of promotional opportunities that are free and cost-effective if you focus on value exchange and relationships. (Never invest in low-quality spam though — make sure any link building is strategic and carefully supervised). Warnings aside, there are ways to hack around tight budgets and achieve more for less with your link building. You don’t always have to invest in expensive advertising and publishing fees to gain good SEO traction. Here are some ways to get links, mentions, and shares without spending too much money (or time). Most of these tactics will help…

The post How to Get Links For (Almost) Free: SEO Hacks For Tight Budgets appeared first on The Daily Egg.

View original: 

How to Get Links For (Almost) Free: SEO Hacks For Tight Budgets

How pilot testing can dramatically improve your user research

Reading Time: 6 minutes

Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.

Unfortunately, user research isn’t always as easy as it sounds.

Do any of the following sound familiar:

  • During your research sessions, your participants don’t understand what they have been asked to do?
  • The phrasing of your questions has given away the answer or has caused bias in your results?
  • During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided?
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results?

If you’ve experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.

Pilot testing is a rehearsal of your research study. It allows you to test your research approach with a small number of test participants before the main study. Although this may seem like an additional step, it may, in fact, be the time best spent on any research project.
Just as proper experiment design is a necessity, investing time to critique, test, and iteratively improve your research design before the execution phase ensures that your user research runs smoothly and dramatically improves the outputs of your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with the process

At WiderFunnel, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (Methodology, Tools, Participant Target Profile)
  2. Pilot testing of research design
  3. Recruiting qualified research participants
  4. Execution of research
  5. Analyzing the outputs
  6. Reporting on research findings
[Image: User Research Process at WiderFunnel]

Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.

Your research should always start with the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of the research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight or, in some cases, the result of working with a large group of stakeholders across different departments, each trying to push their own unique agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret; thus limiting the number of actionable insights generated.

To overcome this, WiderFunnel uses the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of the questions from our initial list and reworking those that remain.

Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.

This follows a rapid and iterative approach, where we pilot our defined research approach on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment

Let’s look at an example.

Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined both the research task and flow, the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order to not bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed list to improve the wording, ensure that questions complemented each other, and keep the focus on the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option when using the Sticky platform. To overcome this we got creative and used some free tools to test the research design.

We chose to use Keynote presentation (timed transitions) and its Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the participant pilot testing. Using these tools we were able to conduct a quick and affordable pilot test.
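For teams on a similar budget, the timed-transition idea is easy to reproduce in a plain web page as well. Below is a minimal sketch of such a pilot harness; the image file names, the 5-second duration, and the survey hand-off page are illustrative assumptions, not the actual study assets.

    // Minimal pilot-test harness: display each stimulus page for a fixed time,
    // then advance automatically, mimicking Keynote's timed transitions.
    // Image paths, the 5000 ms duration, and survey.html are placeholders.
    const stimuli = ['landing-page.png', 'content-page.png'];
    const durationMs = 5000; // the 5-second assumption our pilot later overturned

    const img = document.createElement('img');
    document.body.appendChild(img);

    let index = 0;
    function showNext() {
      if (index >= stimuli.length) {
        window.location.href = 'survey.html'; // hand off to the survey step
        return;
      }
      img.src = stimuli[index];
      index += 1;
      setTimeout(showNext, durationMs);
    }

    showNext();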

The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately pointed out flaws in the research design, including confusion regarding the test instructions and issues with the timing of each task.

In this case, our initial instructions did not give our participants enough context for what they were looking at, resulting in confusion about what they were actually supposed to do. Additionally, we had assumed that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content-rich, and 5 seconds did not give participants enough time to view everything on the page.

With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous “bugs”.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a set function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.


The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.

Nick So, Director of Strategy, WiderFunnel

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought.

By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.

Do you agree that pilot testing is an essential part of all research projects?

Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!

The post How pilot testing can dramatically improve your user research appeared first on WiderFunnel Conversion Optimization.

Source – 

How pilot testing can dramatically improve your user research

How to Leverage eCommerce Conversion Optimization Through Different Channels to Maximize Growth

Note: This is a guest article written by Sujan Patel, co-founder of Web Profits. Any and all opinions expressed in the post are Sujan’s.


“If you build it, they will come” only works in the movies. In the real world, if you’re serious about e-commerce success, it’s up to you to grab the CRO bull by the horns and make the changes needed to maximize your growth.

Yet, despite the potential of conversion rate optimization to have a major impact on your store’s bottom line, only 59% of respondents to an Econsultancy survey see it as crucial to their overall digital marketing strategy. And given that what’s out of sight is out of mind, you can bet that many of the remaining 41% of businesses aren’t giving this strategy the priority it deserves.

Implementing an e-commerce CRO program may seem complex, and it’s easy to get overwhelmed by the number of possible things to test. To simplify your path to proper CRO, we’ve compiled a list of ways to optimize your site by channel.

This list is by no means exhaustive; every marketing channel supports as many opportunities for experimentation as you can dream up. Some of these, however, are the easiest to put into practice, especially for new e-commerce merchants. Begin with the tactics described here; when you’re ready to take your campaigns to the next level, check out the following resources:

On-Page Optimization

Your website’s individual pages represent one of the easiest opportunities for implementing a conversion optimization campaign, thanks to the breadth of technology tools and the number of established testing protocols that exist currently.

These pages can also be one of the fastest, thanks to the direct impact your changes can have on whether or not website visitors choose to buy.

Home Page

A number of opportunities exist for making result-driven changes to your site’s home page. For example, you can test:

  • Minimizing complexity: According to ConversionXL, “simple” websites are scientifically better.
  • Increasing prominence and appeal of CTAs: If visitors don’t like what you’re offering as part of your call-to-action (or worse, if they can’t find your CTA at all), test new options to improve their appeal.
  • Testing featured offers: Even template e-commerce shops generally offer a spot for featuring specific products on your store’s home page. Test which products you place there, the price at which you offer them, and how you draw attention to them.
  • Testing store policies: Free shipping is known to reduce cart abandonment. Implement consumer-friendly policies and test the way you feature them on your site.
  • Trying the “five-second test”: Can visitors recall what your store is about in 5 seconds or less? Attention spans are short, and you might not have longer than that to convince a person to stick around. Tools like UsabilityHub can get you solid data.

Home Page Optimization Case Study

Antiaging skincare company NuFACE made the simple change of adding a “Free Shipping” banner to its site header.

Original: [Image: NuFACE home page, control]

Test Variation: [Image: NuFACE home page with “Free Shipping” banner in the header]

The results of making this change alone were a 90% increase in orders (with a 96% confidence level) and a 7.32% lift in the average order value.
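As an aside, a “confidence level” like this typically comes from a two-proportion z-test on the control and variation conversion rates. Here is a minimal sketch of that calculation; the visitor and order counts are invented for illustration, since the case study does not publish its raw numbers.

    // Two-proportion z-test: how confident can we be that B beats A?
    // The sample counts below are invented for illustration only.
    function confidence(convA, visitsA, convB, visitsB) {
      const pA = convA / visitsA;
      const pB = convB / visitsB;
      const pPool = (convA + convB) / (visitsA + visitsB);
      const se = Math.sqrt(pPool * (1 - pPool) * (1 / visitsA + 1 / visitsB));
      const z = (pB - pA) / se;
      return 0.5 * (1 + erf(z / Math.SQRT2)); // one-sided confidence
    }

    // Error function approximation (Abramowitz & Stegun 7.1.26)
    function erf(x) {
      const sign = x < 0 ? -1 : 1;
      x = Math.abs(x);
      const t = 1 / (1 + 0.3275911 * x);
      const y = 1 - ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
        - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
      return sign * y;
    }

    // 2% vs. 3.8% conversion with 2,000 visitors each: a 90% lift
    console.log(confidence(40, 2000, 76, 2000)); // ≈ 0.9997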

Product Pages

If you’re confident about your home page’s optimization, move on to getting the most out of your individual product pages by testing your:

  • Images and videos
  • Copy
  • Pricing
  • Inclusion of social proof, reviews, and so on

Product Page Optimization Case Study

Underwater Audio challenged itself to simplify the copy on its product comparison page, testing the new page against its original look.

Original: [Image: Underwater Audio product comparison page, control]

Test Variation: [Image: Underwater Audio product comparison page with simplified copy]

This cleaner approach increased website sales for Underwater Audio by 40.81%.

Checkout Flow

Finally, make sure customers aren’t getting hung up in your checkout flow by testing the following characteristics:

Checkout Flow Optimization Case Study

A Scandinavian gift retailer, nameOn, reduced the number of CTAs on their checkout page from 9 to 2.

Original

nameon-1

Test Variation

nameon-2

Making this change led to an estimated $100,000 in increased sales per year.

Lead Nurturing

Proper CRO doesn’t just happen on your site. It should be carried through to every channel you use, including email marketing. Give the following strategies a try to boost your odds of driving conversions, even when past visitors are no longer on your site.

Email Marketing

Use an established email marketing program to take the steps below:

Case Study

There are dozens of opportunities to leverage email to reach out to customers. According to Karolina Petraškienė of Soundest, sending a welcome email results in:

“4x higher open rates and 5x higher click rates compared to other promotional emails. Keeping in mind that in e-commerce, average revenue per promotional email is $0.02, welcome emails on average result in 9x higher revenue — $0.18. And if it’s optimized effectively, revenue can be as high as $3.36 per email.”

Live Chat

LemonStand shares that “live chat has the highest satisfaction levels of any customer service channel, with 73%, compared with 61% for email and 44% for phone.” Add live chat to your store and test the following activities:

Case Study

LiveChat Inc.’s report on chat greeting efficiency shares the example of The Simply Group, which uses customized greetings to assist customers having problems at checkout. Implementing live chat has enabled them to convert every seventh greeting to a chat, potentially saving sales that would otherwise be lost.

Content Marketing

Content marketing may be one of the most challenging channels to optimize for conversions, given the long latency periods between reading content pieces and converting. The following strategies can help:

  • Tie content pieces to business goals.
  • Incorporate content upgrades.
  • Use clear CTAs within content.
  • Test content copy, messaging, use of social proof, and so on.
  • Test different distribution channels and content formats.

Case Study

ThinkGeek uses YouTube videos as a fun way to feature their products and funnel interested prospects back to their site. Their videos have been so successful that they’ve accumulated 180K+ subscribers who tune in regularly for their content.


Post-Acquisition Marketing

According to Invesp, “It costs five times as much to attract a new customer than to keep an existing one.” Continuing to market to past customers, whether in the hopes of selling new items or encouraging referrals, is a great way to boost your overall performance.

Advocacy

Don’t let your CRO efforts stop after a sale has been made. Some of your past clients can be your best sources of new customers, if you take the time to engage them properly.

  • Create an advocacy program: Natural referrals happen, but having a dedicated program turbocharges the process.
  • Test advocacy activation programs: Install a dedicated advocacy management platform like RewardStream or ReferralSaaSquatch and test different methods for promoting your new offering to customers with high net promoter scores.
  • Test different advocate incentives: Try two-way incentives, coupon codes, discounted products, and more.
  • Invest in proper program launch, goal-setting, and ongoing evaluation/management: Customer advocacy programs are never truly “done.”

Case Study

Airbnb tested its advocacy program invitation copy and got better results with the more unselfish version.


Reactivation

As mentioned above in the funnel-stage email recommendation, reactivation messages can be powerful drivers of CRO success.

Pay particular attention to these 2 activities:

  • Setting thresholds for identifying inactive subscribers (a minimal sketch follows this list)
  • Building an automated reactivation workflow that’s as personalized as possible
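As a rough illustration of the first activity, here is a minimal sketch of threshold-based inactivity flagging; the 90-day cutoff and the subscriber record shape are assumptions made for this example, not any particular email platform’s API.

    // Flag subscribers with no engagement in the last N days as "inactive".
    // The 90-day threshold and record shape are illustrative assumptions.
    const INACTIVE_AFTER_DAYS = 90;

    function findInactive(subscribers, now = Date.now()) {
      const cutoff = now - INACTIVE_AFTER_DAYS * 24 * 60 * 60 * 1000;
      return subscribers.filter(s => new Date(s.lastEngagedAt).getTime() < cutoff);
    }

    const list = [
      { email: 'a@example.com', lastEngagedAt: '2016-01-10' },
      { email: 'b@example.com', lastEngagedAt: '2016-08-01' },
    ];
    // Everyone past the cutoff enters the automated reactivation workflow.
    console.log(findInactive(list).map(s => s.email));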

Case Study

RailEasy increased opens by 31% and bookings by 38% with a reactivation email featuring a personalized subject line.


Internal Efforts

Lastly, make CRO an ongoing practice by prioritizing it internally, rather than relegating it to “something the marketing department does.”

Ask CRO experts, and they’ll tell you that beyond the kinds of tactics and strategies described above, having a culture of experimentation and testing is the most important step you can take to see results from any CRO effort.

Here’s how to do it:

Have an idea for another way CRO can be used within e-commerce organizations? Leave your suggestions in the comments below.


The post How to Leverage eCommerce Conversion Optimization Through Different Channels to Maximize Growth appeared first on VWO Blog.

This article is from: 

How to Leverage eCommerce Conversion Optimization Through Different Channels to Maximize Growth

Infographic: How To Create The Perfect Social Media Post

It’s easy to get into the habit of robotically posting content to social media every day. However, how you post to social media is just as important as the content itself. You need people to click on your post to see your content. So, before you rush around doing your daily social media tasks, pause, take a step back and think about how you can improve what you’re doing. Study this infographic and use it as a cheat sheet the next time you post to social media. One last hint: It’s a good idea to measure your social media efforts…

The post Infographic: How To Create The Perfect Social Media Post appeared first on The Daily Egg.

Continued here – 

Infographic: How To Create The Perfect Social Media Post

12 Eye-Opening Video Marketing Stats to Help Boost Your Landing Page Conversions


Video marketing has been on the rise for more than a decade now. Consumers are getting more and more used to consuming video content wherever they go, be it on Facebook or on a product page. Which may make one think: Isn’t video content expected by now? Shouldn’t we produce a video every chance we get? However, the real question is: Will videos be a conversion ignitor or a conversion killer? Let’s find out! First, Some Tempting Stats… There are plenty of case studies and reports claiming that using a video on a landing page is a great idea for…

The post 12 Eye-Opening Video Marketing Stats to Help Boost Your Landing Page Conversions appeared first on The Daily Egg.

Link: 

12 Eye-Opening Video Marketing Stats to Help Boost Your Landing Page Conversions

Your growth strategy and the true potential of A/B testing

Reading Time: 7 minutes

Imagine being a leader who can see the future…

Who can know if a growth strategy will succeed or fail before investing in it.

Who makes confident decisions based on what she knows her users want.

Who puts proven ideas to work to cut spending and lift revenue.

Okay. Now stop imagining, because you can be that leader…right now. You just need the right tool. (And no, I’m not talking about a crystal ball.) I’m talking about testing.


So many marketers approach “conversion optimization” and “A/B testing” with the wrong goals: they think too small. Their testing strategy is hyper-focused on increasing conversions. Your Analytics team can A/B test button colors and copy tweaks and design changes until they are blue in the face, but if that’s all your company is doing, you are missing out on the true potential of conversion optimization.

Testing should not be a small piece of your overall growth strategy. It should not be relegated to your Analytics department, or shouldered by a single optimizer. Because you can use testing to interrogate and validate major business decisions.

“Unfortunately, most marketers get [conversion optimization] wrong by considering it to be a means for optimizing a single KPI (e.g – registrations, sales or downloads of an app). However conversion optimization testing is much much more than that. Done correctly with a real strategic process, CRO provides in-depth knowledge about our customers.

All this knowledge can then be translated into a better customer journey, optimized customer success and sales teams, we can even improve shipping and of course the actual product or service we provide. Every single aspect of our business can be optimized leading to higher conversion rates, more sales and higher retention rates. This is how you turn CRO from a “X%” increase in sign ups to complete growth of your business and company.

Once marketers and business owners follow a process, stop testing elements such as call to action buttons or titles for the sake of it, and move onto testing more in-depth processes and strategies, only then will they see those uplifts and growth they strive for that scale and keep.”

– Talia Wolf, CMO, Banana Splash

Testing and big picture decision making should be intertwined. And if you want to grow and scale your business, you must be open to testing the fundamentals of said business.

Imagine spearheading a future-proof growth strategy. That’s what A/B testing can do for you.

In this post, I’m going to look at three examples of using testing to make business decisions. Hopefully, these examples will inspire you to put conversion optimization to work as a truly influential determinant of your growth strategy.

Testing a big business decision before you make it

Often, marketers look to testing as a way to improve digital experiences that already exist. When your team tests elements on your page, they are testing what you have already invested in (and they may find those elements aren’t working…)

  • “If I improve the page UX, I can increase conversions”
  • “If I remove distracting links from near my call-to-action button, I can increase conversions”
  • “If I add a smiling person to my hero image, I can capture more leads”, etc.

But if you want to stay consistently ahead of the marketing curve, you should test big changes before you invest in them. You’ll save money, time, resources. And, as with any properly-structured test, you will learn something about your users.

A B2C Example

One WiderFunnel client is a company that provides an online consumer information service—visitors type in a question and get an Expert answer.

The marketing leaders at this company wanted to add some new payment options to the checkout page of their mobile experience. After all, it makes sense to offer alternative payment methods like Apple Pay and Amazon Payments to mobile users, right?

Fortunately, this company is of a test-first, implement-second mindset.

With the help of WiderFunnel’s Strategy team, this client ran a test to identify demand for new payment methods before actually putting any money or resources into implementing said alternative payment methods.

This test was not meant to lift conversion rates. Rather, it was designed to determine which alternative payment methods users preferred.

Note: This client did not actually support the new payment methods when we ran this test. When a user clicked on the Apple Pay method, for instance, they saw the following message:

“Apple Pay coming soon!
We apologize for any inconvenience.
Please choose an available deposit method:
Credit Card | PayPal”

[Image: Alternative payment options on the checkout page. Should this client invest in alternative payment methods? Only the test will tell!]
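Implementation-wise, a “fake door” test like this needs very little code: render the new options, track the clicks, and show a polite message. The sketch below is a generic illustration; the element ID and both helper functions are invented, not the client’s actual markup.

    // "Fake door" demand test: the payment method is not built yet;
    // we only measure how many users try to select it.
    // The element ID and both helpers are invented for illustration.
    function track(eventName, props) {
      console.log('track:', eventName, props); // stand-in for a real analytics call
    }

    function showMessage(text) {
      alert(text); // stand-in for the client's styled modal
    }

    document.getElementById('pay-apple-pay').addEventListener('click', function (e) {
      e.preventDefault();
      track('payment_method_clicked', { method: 'apple_pay' });
      showMessage(
        'Apple Pay coming soon!\n' +
        'We apologize for any inconvenience.\n' +
        'Please choose an available deposit method: Credit Card | PayPal'
      );
    });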

Not only did this test provide the client with the insight they were looking for about which alternative payment methods their users prefer, but (BONUS!) it also produced significant increases in conversions, even though that was not our intention.

Because they tested first, this client can now invest in the alternative payment options that are most preferred by their users with confidence. Making a big business change doesn’t have to be a gamble.

As Sarah Breen of ASICS said,

We’re proving our assumptions with data. Testing allows me to say, ‘This is why we took this direction. We’re not just doing what our competitors do, it’s not just doing something that we saw on a site that sells used cars. This is something that’s been proven to work on our site and we’re going to move forward with it.’

Testing what you actually offer, part I

Your company has put a lot of thought (research, resources, money) into determining what you should actually offer. It can be overwhelming to even ask the question, “Is our product line actually the best offering A) for our users and B) for our business?”

But asking the big scary questions is a must. Your users are evolving, how they shop is evolving, your competition is evolving. Your product offering must evolve as well.

Some companies bring in experienced product consultants to advise them, but why not take the question to the people (aka your users)…and test your offering.

An E-commerce Example

Big scary question: Have you ever considered reducing the number of products you offer?

One WiderFunnel client offers a huge variety of products. During a conversation between our Strategists and the marketing leaders at this company, the idea to test a reduced product line surfaced.

The thinking was that even if conversions stayed flat with a fewer-products variation, this test would be considered a winner if the reduction in products meant money saved on overhead costs, such as operations costs, shipping and logistics costs, manufacturing costs and so on.

[Image: The Jam Study is one of the most famous demonstrations of the Paradox of Choice.]

Plus! There is a psychological motivator that backs up less-is-more thinking: The Paradox of Choice suggests that fewer options might mean less anxiety for visitors. If a visitor has less anxiety about which product is more suitable for them, they may have increased confidence in actually purchasing.

After working with this client’s team to cut down their product line to just the essential top 3 products, our Strategists created what they refer to as the ‘Minimalist’ variation. This variation will be tested against the original product page, which features many products.

[Image: This client’s current product category page features many products. The ‘Minimalist’ variation highlights just their top 3 products.]

If the ‘Minimalist’ variation is a clear winner, this client will be armed with the information they need to consider halting the manufacture of several older products—a potentially dramatic cost-saving initiative.

Even if the variation is a loser, the insights gained could be game-changing. If the ‘Minimalist’ variation results in a revenue loss of 10%, but the eliminated products cost more than that lost revenue to manufacture and carry, this client would come out ahead on net (a worked example follows below). Which means they would want to seriously consider fewer products as an option.
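Here is that arithmetic made concrete with invented figures (the client’s real numbers are confidential): a 10% revenue loss still nets out positive when the overhead removed with the cut products is larger.

    // Worked example with invented figures: does cutting the product line pay off?
    const revenue = 1000000;      // $/year before the change (illustrative)
    const costs = 700000;         // manufacturing, shipping, operations (illustrative)

    const revenueDrop = 0.10;     // the 'Minimalist' variation loses 10% of revenue
    const overheadSaved = 150000; // costs removed along with the cut products

    const profitBefore = revenue - costs;                                      // 300,000
    const profitAfter = revenue * (1 - revenueDrop) - (costs - overheadSaved); // 350,000

    console.log(profitAfter > profitBefore); // true: a net gain despite lower revenue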

Regardless of the outcome, an experiment like this one will give the marketing decision-maker evidence to make a more informed decision about a fundamental aspect of their business.

Cutting products is a huge business decision, but if you know how your users will respond ahead of time, you can make that decision without breaking a sweat.

Testing what you actually offer, part II

Experienced marketers often assume that they know best. They assume they know what their user wants and needs, because they have ‘been around’. They may assume that, because everybody else is offering something, it is the best offering (the “our-competitors-are-emphasizing-this-so-it-must-be-the-most-important-offering” mentality).

Well, here’s another big scary question: Does your offering reflect what your users value most? Rather than guessing, push your team to dig into the data, find the gaps in your user experience, and test your offering.

“Most conversion optimization work happens behind the scenes: the research process to understand the user. From the research you form various hypotheses for what they want and how they want it.

This informs [what] you come up with, and with A/B/n testing you’re able to validate market response…before you go full in and spend all that money on a strategy that performs sub-optimally.”

– Peep Laja, Founder, ConversionXL

A B2B Example

When we started working with the SaaS company Magento, they were offering a ‘Free Demo’ of the Enterprise Edition of their software. Offering a ‘Free Demo’ is a best practice for software companies—everybody does it, and it was probably a no-brainer for Magento’s product team.

Looking at clickmap data, however, WiderFunnel’s Strategists noticed that Magento users were really engaged with the informational tabs lower down on the product page.

They had the option to try the ‘Free Demo’, but the data indicated that they were looking for more information. Unfortunately, once users had finished browsing tabs, there was nowhere else to go.

So, our Strategists decided to test a secondary ‘Talk to a specialist’ call-to-action.

[Image: Is the ‘Free Demo’ offering always what software shoppers are looking for?]

This call-to-action hadn’t existed prior to this test, so the literally infinite conversion rate lift Magento saw in qualified sales calls was not surprising (any conversions against a baseline of zero are an infinite lift). What was surprising was the phone call we received 6 months later: it turns out the ‘Talk to a specialist’ leads were far more valuable than the ‘Get a free demo’ leads.

After several subsequent test rounds, “Talk to a specialist” became the main call-to-action on this page. Magento’s most valuable prospects value the opportunity to get more information from a specialist more than they value a free product demo. SaaS ‘best practices’ be damned.

Optimization is a way of doing business. It’s a strategy for embedding a test-and-learn culture within every fibre of your business.

– Chris Goward, Founder & CEO, WiderFunnel

You don’t need to be a mind-reader to know what your users want, and you don’t need to be a seer to know whether a big business change will succeed or flop. You simply need to test.

Leave your ego at the door and listen to what your users are telling you. Be the marketing leader with the answers, the leader who can see the future and can plan her growth strategy accordingly.

How do you use testing as a tool for making big business decisions? Let us know in the comments!

The post Your growth strategy and the true potential of A/B testing appeared first on WiderFunnel Conversion Optimization.

Link: 

Your growth strategy and the true potential of A/B testing

Making And Maintaining Atomic Design Systems With Pattern Lab 2


The benefits of UI design systems are now well known. They lead to more cohesive, consistent user experiences. They speed up your team’s workflow, allowing you to launch more stuff while saving huge amounts of time and money in the process. They establish a common vocabulary between disciplines, resulting in a more collaborative and constructive workflow.


They make browser, device, performance, and accessibility testing easier. And they serve as a solid foundation to build upon over time, helping your organization to more easily adapt to the ever-shifting web landscape. This article provides a detailed guide to building and maintaining atomic design systems with Pattern Lab 2.

The post Making And Maintaining Atomic Design Systems With Pattern Lab 2 appeared first on Smashing Magazine.

Visit site – 

Making And Maintaining Atomic Design Systems With Pattern Lab 2


How VWO Affects your Site Speed

A recent post published on the ConversionXL blog highlighted a study conducted to find out how different A/B testing tools affect site speed. VWO was among the tools compared and we feel the results of the study are not an accurate reflection of our technology.

In this post, we are going to highlight some aspects of the study that had a large impact on the results and also elaborate on how the VWO code affects your page load time.

Sub-optimal Test Configuration with ‘DOM Ready’ Event Conditions

In the research conducted by OrangeValley, the test campaign was configured with sub-optimal settings within VWO. Rather than using the recommended WYSIWYG Visual Editor to change the background image and text, custom JavaScript code was used to make the changes. In addition, the code was configured to fire after the page had loaded (in JavaScript this is called a ‘DOM ready’ event). This impacted the result in two ways:

  1. The inclusion of the ‘DOM ready’ event made the VWO code wait for the entire page to load before it could display the changes made in the variation. The web page used in the study had a page load time of around 2.7 seconds.
    Here’s a snapshot of the campaign code from VWO settings (while the code waits for ‘DOM ready’): [Image: sub-optimal configuration, waiting for DOM ready]
    Here’s another snapshot of the campaign code (using the optimal way, without ‘DOM ready’): [Image: optimal configuration, without waiting for DOM ready]
    Executing after the ‘DOM ready’ event will invariably increase the experienced loading time; a simplified sketch of the general case follows this list.
  2. To counter flicker and ensure a smooth loading experience for visitors, VWO hides all the relevant elements before they start loading on the page and only unhides them once the VWO code has swapped them with the variation changes. All this happens within a span of 110 milliseconds on average. If the campaign had been configured using the optimal manner, the flicker effect would have automatically been taken care of by our SmartCode.
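To see why the ‘DOM ready’ condition matters, here is the simplified illustration promised above, in plain JavaScript. This is not VWO’s actual implementation, and the .hero selector is invented; it assumes the snippet runs early, in the <head>.

    // Simplified illustration only; not VWO's actual implementation.

    // Sub-optimal: the variation change waits until the whole DOM is parsed.
    document.addEventListener('DOMContentLoaded', function () {
      document.querySelector('.hero').textContent = 'Variation headline';
    });

    // Better: act as soon as the target element appears during parsing.
    const observer = new MutationObserver(function () {
      const hero = document.querySelector('.hero');
      if (hero) {
        hero.textContent = 'Variation headline';
        observer.disconnect();
      }
    });
    observer.observe(document.documentElement, { childList: true, subtree: true });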

Asynchronous vs. Synchronous Code

As clearly noted in the blog post, “Most A/B testing software create an additional step in loading and rendering a web page”. This happens because many A/B testing tools use synchronous code. VWO, with its asynchronous SmartCode is an exception to this.

With its unique asynchronous code, VWO ensures there is no delay in the load time of the page: the code does not block the rendering of any part of the page and does not impact page load time.

[Image: Asynchronous vs. synchronous code comparison]

What is asynchronous code and why does VWO recommend it?

Simply put, asynchronous means that the code will contact VWO’s servers in the background, download and process the test package while the rest of your page continues to load and render in the browser normally.

With synchronous code, the browser has to wait for the package to download and then process it before loading the rest of the page. If for any reason the tracking code can’t contact its servers, the browser will wait, usually for 30 to 60 seconds, until the request times out. If your tracking code is in the <head> tag, your entire page won’t load and your visitor will be stuck with a blank page. Asynchronous code does not have this critical problem: if for any reason the asynchronous VWO SmartCode can’t contact our servers, your page will still download and render properly.
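For reference, the generic async-loader pattern looks like the sketch below. This illustrates the technique in general, not VWO’s actual SmartCode, and the script URL is a placeholder.

    // Generic async-loader pattern (illustrative; not VWO's SmartCode).
    // The browser keeps parsing and rendering while this script downloads.
    (function () {
      var s = document.createElement('script');
      s.src = 'https://example.com/ab-testing.js'; // placeholder URL
      s.async = true;
      var first = document.getElementsByTagName('script')[0];
      first.parentNode.insertBefore(s, first);
    })();

    // By contrast, a synchronous tag written directly into the <head>,
    //   <script src="https://example.com/ab-testing.js"></script>
    // blocks parsing until the file is fetched, and if the server is
    // unreachable the page can stall until the request times out.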

Private and Dynamic Content Delivery Network (CDN)

VWO has invested years of engineering effort to ensure that we build best-in-class technology and infrastructure for our customers. An important part of this is setting up our own private CDN, which uses Bare Metal Servers from IBM SoftLayer’s state-of-the-art global data centers. This ensures that we always have a server close to your visitor’s location, minimizing latency and reducing critical download time.

[Image: VWO CDN map]

VWO’s CDN is also dynamic: it not only caches the static code required to run tests but also generates dynamic test variation data. This gives it an edge over regular CDNs, which only cache static data. By dynamically generating test packages, only the relevant variation data is sent to each visitor, depending on the URL they are on.

Small Size of VWO’s Test Packages

Another factor that significantly impacts page load time is the size of test packages, which determines how long the browser takes to download them. VWO ensures small package sizes through two methods: intelligently recording only the precise changes that you make to your page, and packaging tests individually so that each visitor receives only the content relevant to the URL they are on.

Let us suppose you edit the HTML of your product page template to make two changes: increasing the font size of your text and inserting a ‘recommended’ icon. VWO compares your changes to the original code and detects the precise edits – the change made to the CSS property “font-size” and the insertion of a new graphic (sketched below). Other systems will save the entire block of code, which also converts the dynamic content into static content and ends up showing the same description across all product pages.
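Conceptually, the recorded change-set can be that small. The sketch below is illustrative only; the selectors and the change format are invented, not VWO’s internal representation.

    // Apply only the recorded edits; everything else on the page stays dynamic.
    // Selectors and change format are invented for illustration.
    const changes = [
      { selector: '.product-description', css: { fontSize: '18px' } },
      { selector: '.product-title', insertHTML: '<img src="recommended.png" alt="Recommended">' },
    ];

    changes.forEach(function (change) {
      document.querySelectorAll(change.selector).forEach(function (el) {
        if (change.css) Object.assign(el.style, change.css);
        if (change.insertHTML) el.insertAdjacentHTML('beforeend', change.insertHTML);
      });
    });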


VWO’s CDN is custom built for optimal A/B testing performance. It unbundles the payload to make smaller packages for each individual test and URL. Some A/B testing tools bundle all the data for all tests running on a domain into one large package. We’ve seen package sizes of up to 3MB when a website is running many tests, which obviously increases page load time. VWO, on the other hand, only sends the data required for the test running on a particular URL to make a small, tidy payload which downloads quickly. This is especially advantageous when you’re a power-testing team running multiple tests on different parts of your domain.


Having said all this, we are confident that VWO’s best-in-class technology coupled with optimal campaign settings will ensure that your website never slows down. We would also like to invite OrangeValley to work with us on setting up their campaign correctly so that they can present a true comparison of all the tools in their study.

As always, if you still have any doubts or clarifications to seek in this regard, please feel free to reach out to me directly at sparsh@wingify.com.

Happy Optimizing!

The post How VWO Affects your Site Speed appeared first on VWO Blog.

Source:

How VWO Affects your Site Speed

Guide To Using WebP Images Today (A Case Study)


They say a picture is worth a thousand words. But online, a picture can be worth a thousand kilobytes or more! HTTP Archive shows that images make up 64% of a web page’s total size on average. Given this, image optimization is key, especially considering that many users will abandon a request if it doesn’t load within a few seconds.

WebP Images And Performance

The problem with image optimization is that we want to keep file sizes small without sacrificing quality. Past attempts to create file types that optimize images better than the standard JPEG, PNG and GIF formats have been unsuccessful.
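One common way to adopt WebP while browser support is uneven is feature detection with a fallback; a generic sketch follows (the case study’s own approach may differ). The test image is the standard 1x1 lossy WebP used in Google’s detection snippet.

    // Detect WebP support, then let CSS or image-loading code pick formats.
    // Generic pattern; the case study's own implementation may differ.
    function supportsWebP(callback) {
      const img = new Image();
      img.onload = function () { callback(img.width > 0 && img.height > 0); };
      img.onerror = function () { callback(false); };
      // Standard 1x1 lossy WebP test image from Google's detection snippet
      img.src = 'data:image/webp;base64,UklGRiIAAABXRUJQVlA4IBYAAAAwAQCdASoBAAEADsD+JaQAA3AAAAAA';
    }

    supportsWebP(function (supported) {
      document.documentElement.classList.add(supported ? 'webp' : 'no-webp');
      // CSS can then serve .webp backgrounds only when the class is present.
    });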

The post Guide To Using WebP Images Today (A Case Study) appeared first on Smashing Magazine.

Link:

Guide To Using WebP Images Today (A Case Study)