Building A Central Logging Service In-House

Akhil Labudubariki



We all know how important debugging is for improving application performance and features. BrowserStack runs one million sessions a day on a highly distributed application stack! Each session involves several moving parts, since a client’s single session can span multiple components across several geographic regions.

Without the right framework and tools, the debugging process can be a nightmare. In our case, we needed a way to collect events happening during different stages of each process in order to get an in-depth understanding of everything taking place during a session. With our infrastructure, this is complicated: each component might log multiple events over its lifecycle of processing a request.

That’s why we developed our own in-house Central Logging Service (CLS) to record all important events logged during a session. These events help our developers identify conditions where something goes wrong in a session and help us keep track of certain key product metrics.

Debugging data ranges from simple things like API response latency to monitoring a user’s network health. In this article, we share the story of building our CLS tool, which reliably collects 70 GB of relevant chronological data per day from 100+ components, at scale, on just two m3.large EC2 instances.

The Decision To Build In-House

First, let’s consider why we built our CLS tool in-house rather than using an existing solution. Each of our sessions sends 15 events on average, from multiple components to the service, which translates into approximately 15 million total events per day.

Our service needed the ability to store all this data. We sought a complete solution to support event storing, sending, and querying. As we considered third-party solutions such as Amplitude and Keen, our evaluation metrics included cost, performance in handling highly parallel requests, and ease of adoption. Unfortunately, we could not find a fit that met all our requirements within budget, even though the benefits would have included saving time and minimizing alerts. While it would take additional effort, we decided to develop an in-house solution ourselves.


One of the biggest issues with building in-house is the amount of resources we need to spend maintaining it. (Image credit: Digiday)

Technical Details

When architecting our service, we outlined the following basic requirements:

  • Client Performance
    Does not impact the performance of the client/component sending the events.
  • Scale
    Able to handle a high number of requests in parallel.
  • Service performance
    Quick to process all events being sent to it.
  • Insight into data
    Each event logged needs to have some meta information to be able to uniquely identify the component or user, account or message and give more information to help the developer debug faster.
  • Queryable interface
    Developers can query all events for a particular session, helping to debug a particular session, build component health reports, or generate meaningful performance statistics of our systems.
  • Faster and easier adoption
    Easy integration with an existing or new component without burdening teams and taking up their resources.
  • Low maintenance
    We are a small engineering team, so we sought a solution to minimize alerts!

Building Our CLS Solution

Decision 1: Choosing An Interface To Expose

In developing CLS, we obviously didn’t want to lose any of our data, but we didn’t want component performance to take a hit either. We also wanted to avoid making existing components more complicated, since that would delay overall adoption and release. In determining our interface, we considered the following choices:

  1. Storing events in a local Redis in each component, with a background processor pushing them to CLS. However, this requires a change in all components, along with introducing Redis to components that didn’t already use it.
  2. A publisher-subscriber model, where Redis sits closer to CLS. With components publishing events from across the globe, however, writes during high-traffic periods would delay the components; a single write could intermittently take up to five seconds (due to internet latency alone).
  3. Sending events over UDP, which has less impact on application performance. Data would be sent and forgotten; the disadvantage, however, is possible data loss.

Interestingly, our data loss over UDP was less than 0.1 percent, which was an acceptable amount for us to consider building such a service. We were able to convince all teams that this amount of loss was worth the performance, and went ahead and built a UDP interface that listened for all events being sent.

While one result was a smaller impact on an application’s performance, we did face an issue: UDP traffic is not allowed from all networks (mostly our users’), causing us in some cases to receive no data at all. As a workaround, we support logging events via HTTP requests. All events coming from the user’s side are sent via HTTP, whereas all events recorded from our own components are sent via UDP.
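
To make this dual transport concrete, here is a minimal sketch in Go of what a component-side logger could look like. This is an assumption-level illustration, not BrowserStack’s actual client: the package name, function names, and endpoints are made up.

// Hypothetical component-side logger: UDP is fire-and-forget for internal
// components, while HTTP is the fallback for events from user networks
// where UDP may be blocked.
package logger

import (
    "bytes"
    "net"
    "net/http"
)

// SendUDP fires the payload at CLS and forgets it. Errors are deliberately
// swallowed; this is where the acceptable ~0.1% data loss comes from.
func SendUDP(addr string, payload []byte) {
    conn, err := net.Dial("udp", addr)
    if err != nil {
        return
    }
    defer conn.Close()
    conn.Write(payload) // no ack, no retry
}

// SendHTTP posts the same payload over HTTP for clients that cannot use UDP.
func SendHTTP(url string, payload []byte) error {
    resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
    if err != nil {
        return err
    }
    return resp.Body.Close()
}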

Decision 2: Tech Stack (Language, Framework & Storage)

We are a Ruby shop. However, we were uncertain whether Ruby would be a good choice for our particular problem. Our service would have to handle a lot of incoming requests and process a lot of writes. With the Global Interpreter Lock, achieving multithreading or concurrency would be difficult in Ruby (please don’t take offense; we love Ruby!). So we needed a solution that could deliver this kind of concurrency.

We were also keen to evaluate a new language in our tech stack, and this project seemed perfect for experimenting with new things. That’s when we decided to give Golang a shot, since it offers built-in support for concurrency through lightweight goroutines. Each logged data point is a key-value pair, where the key names the event and the value carries its associated data.

But a simple key and value are not enough to retrieve session-related data; each event needs more metadata. To address this, we decided that any event to be logged would carry a session ID along with its key and value. We also added extra fields such as the timestamp, the user ID, and the component logging the data, making it easier to fetch and analyze the data.
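
For illustration, the payload could be modeled as a struct like the one below. The article doesn’t give the exact field names, so treat these as assumptions.

// Assumed shape of a CLS event: a key-value pair plus the metadata that
// ties the event to a session, a user, and the component that logged it.
type Event struct {
    SessionID string `json:"session_id"` // groups events by session
    Key       string `json:"key"`        // the event name
    Value     string `json:"value"`      // the event's associated value
    Timestamp int64  `json:"timestamp"`  // when the event occurred
    UserID    string `json:"user_id"`    // who triggered it
    Component string `json:"component"`  // which component logged it
}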

Now that we had decided on our payload structure, we had to choose our datastore. We considered Elasticsearch, but we also wanted to support update requests for keys, and an update would trigger the entire document to be re-indexed, which might affect the performance of our writes. MongoDB made more sense as a datastore, since it would be easy to query all events based on any of the data fields we added. This was easy!

Decision 3: DB Size Is Huge And Query And Archiving Sucks!

To keep maintenance low, our service had to be able to handle as many events as possible. Given the rate at which BrowserStack releases features and products, we were certain the number of events would keep increasing over time, meaning our service would have to keep performing well. As space increases, reads and writes take more time, which could be a huge hit on the service’s performance.

The first solution we explored was moving logs older than a certain period out of the database (we settled on 15 days). To do this, we created a different database for each day, allowing us to find logs older than a particular period without having to scan all written documents. Now we continually remove databases older than 15 days from Mongo, while of course keeping backups just in case.
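
A sketch of that rotation logic might look like the following, assuming the official Go MongoDB driver; the database naming scheme and helper names are illustrative, not the real service’s.

// Hypothetical per-day database rotation for CLS-style log retention.
package rotation

import (
    "context"
    "fmt"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
)

// dbNameFor maps a date to a database name, e.g. "logs_2018_03_14".
func dbNameFor(t time.Time) string {
    return fmt.Sprintf("logs_%s", t.Format("2006_01_02"))
}

// WriteEvent inserts the event into today's database, so old logs can be
// found (and dropped) without scanning every written document.
func WriteEvent(ctx context.Context, client *mongo.Client, doc interface{}) error {
    coll := client.Database(dbNameFor(time.Now())).Collection("events")
    _, err := coll.InsertOne(ctx, doc)
    return err
}

// PruneOldest runs once a day and drops the database that has just crossed
// the 15-day retention boundary (after backups are taken elsewhere).
func PruneOldest(ctx context.Context, client *mongo.Client) error {
    cutoff := time.Now().AddDate(0, 0, -15)
    return client.Database(dbNameFor(cutoff)).Drop(ctx)
}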

The only leftover piece was a developer interface for querying session-related data. Honestly, this was the easiest problem to solve. We provide an HTTP interface where people can query for session-related events in the corresponding MongoDB database, given a particular session ID.

Architecture

Let’s talk about the internal components of the service, considering the following points:

  1. As previously discussed, we needed two interfaces, one listening over UDP and another over HTTP. So we built two servers, one per interface, to listen for events. As soon as an event arrives, we parse it to check whether it has the required fields: session ID, key, and value. If it does not, the data is dropped. Otherwise, the data is passed over a Go channel to another goroutine, whose sole responsibility is to write to MongoDB. (A minimal sketch of this pipeline follows this list.)
  2. A possible concern here is writing to MongoDB. If writes to MongoDB are slower than the rate at which data is received, this creates a bottleneck. This, in turn, starves other incoming events and means dropped data. The server, therefore, should be fast in processing incoming logs and ready for those still to come. To address the issue, we split the server into two parts: the first receives all events and queues them up for the second, which processes and writes them to MongoDB.
  3. For queuing we chose Redis. By dividing the entire component into these two pieces, we reduced the server’s workload, giving it room to handle more logs.
  4. We wrote a small service using Sinatra to handle all the work of querying MongoDB with given parameters. It returns an HTML/JSON response to developers when they need information on a particular session.
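
Below is a stripped-down sketch of that pipeline in Go, covering only the UDP half: a listener validates events and hands them over a channel to a writer goroutine. The port, buffer sizes, and persist stub are assumptions; the real service adds the HTTP listener and the Redis queue between the two halves.

// Minimal sketch of a CLS-style ingestion pipeline (UDP half only).
package main

import (
    "encoding/json"
    "log"
    "net"
)

// Only the fields the server validates on arrival.
type Event struct {
    SessionID string `json:"session_id"`
    Key       string `json:"key"`
    Value     string `json:"value"`
}

func persist(e Event) { /* enqueue in Redis, then write to MongoDB */ }

func main() {
    // Buffered channel so short bursts don't block the listener.
    events := make(chan Event, 1024)

    // Writer goroutine: its sole responsibility is persisting events.
    go func() {
        for e := range events {
            persist(e)
        }
    }()

    addr, err := net.ResolveUDPAddr("udp", ":8089")
    if err != nil {
        log.Fatal(err)
    }
    conn, err := net.ListenUDP("udp", addr)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    buf := make([]byte, 64*1024)
    for {
        n, _, err := conn.ReadFromUDP(buf)
        if err != nil {
            continue
        }
        var e Event
        // Drop anything missing the required fields.
        if json.Unmarshal(buf[:n], &e) != nil || e.SessionID == "" || e.Key == "" {
            continue
        }
        events <- e
    }
}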

All these processes happily run on a single m3.large instance.


CLS v1: A representation of the system’s first architecture, with all components running on a single machine.

Feature Requests

As our CLS tool saw more use over time, it needed more features. Below, we discuss these and how they were added.

Missing Metadata

As the number of components in BrowserStack grew, we demanded more from CLS. For example, we needed the ability to log events from components lacking a session ID, since obtaining one would burden our infrastructure, both by affecting application performance and by incurring traffic on our main servers.

We addressed this by enabling event logging using other keys, such as terminal and user IDs. Now, whenever a session is created or updated, CLS is informed of the session ID as well as the corresponding user and terminal IDs. It stores a map that the MongoDB-writing process can consult: whenever an event arrives containing either a user or terminal ID, the session ID is looked up and added.
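
A hypothetical sketch of that lookup table in Go; the names are illustrative:

// Assumed mechanism: an in-memory map from user/terminal IDs to session IDs,
// consulted by the Mongo writer to stamp events that arrive without one.
package metadata

import "sync"

type SessionMap struct {
    mu  sync.RWMutex
    ids map[string]string // user ID or terminal ID -> session ID
}

func NewSessionMap() *SessionMap {
    return &SessionMap{ids: make(map[string]string)}
}

// Update is called whenever a session is created or updated.
func (m *SessionMap) Update(sessionID, userID, terminalID string) {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.ids[userID] = sessionID
    m.ids[terminalID] = sessionID
}

// Resolve fills in the missing session ID before an event is written.
func (m *SessionMap) Resolve(userOrTerminalID string) (string, bool) {
    m.mu.RLock()
    defer m.mu.RUnlock()
    sessionID, ok := m.ids[userOrTerminalID]
    return sessionID, ok
}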

Handle Spamming (Code Issues In Other Components)

CLS also faced the usual difficulties of handling spam events. We often saw component deploys that generated a huge volume of requests to CLS. Other logs would suffer in the process, as the server became too busy to process them and important logs were dropped.

Most of the data being logged arrives via HTTP requests. To control them, we enabled rate limiting in nginx (using the limit_req_zone module), which blocks requests from any IP found making more than a certain number of requests within a small window of time. Of course, we also generate health reports on all blocked IPs and inform the responsible teams.
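
For illustration, a minimal nginx configuration using limit_req_zone might look like the following; the zone name, rate, burst allowance, and endpoint are assumptions rather than CLS’s production values.

# Track clients by IP in a 10 MB zone, allowing 50 requests/second.
limit_req_zone $binary_remote_addr zone=cls_events:10m rate=50r/s;

server {
    listen 80;

    location /events {
        # Allow short bursts of up to 100 queued requests; anything beyond
        # that is rejected (503 by default).
        limit_req zone=cls_events burst=100 nodelay;
        proxy_pass http://127.0.0.1:8089;
    }
}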

Scale v2

As our sessions per day increased, the data being logged to CLS increased as well. This affected the queries our developers ran daily, and soon the bottleneck was the machine itself. Our setup consisted of a single two-core machine running all of the above components, along with a bunch of scripts to query Mongo and keep track of key metrics for each product. Over time, the data on the machine grew heavily and the scripts began to take a lot of CPU time. Even after trying to optimize the Mongo queries, we always came back to the same issues.

To solve this, we added another machine for running the health-report scripts and the interface for querying sessions. The process involved booting a new machine and setting up a slave of the MongoDB instance running on the main machine. This has helped reduce the daily CPU spikes caused by these scripts.


CLS v2: A representation of the current system’s architecture. Logs are written to the master machine and synced to the slave machine; developers’ queries run on the slave machine.

Conclusion

Building a service for a task as simple as data logging can get complicated as the amount of data increases. This article discusses the solutions we explored, along with the challenges faced while solving this problem. We experimented with Golang to see how well it would fit our ecosystem, and so far we have been satisfied. Our choice to create an internal service rather than paying for an external one has been wonderfully cost-efficient. We also didn’t have to scale our setup to another machine until much later, when the volume of our sessions increased. Of course, our choices in developing CLS were based entirely on our requirements and priorities.

Today, CLS handles up to 15 million events every day, constituting up to 70 GB of data. This data helps us solve any issues our customers face during a session, and we use it for other purposes as well: given the insights each session’s data provides on different products and internal components, we’ve begun leveraging it to keep track of each product by extracting the key metrics for all the important components.

All in all, we’ve seen great success in building our own CLS tool. If it makes sense for you, I recommend you consider doing the same!

FAST UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Zoe Dimov



Today, UX research has earned wide recognition as an essential part of product and service design. However, UX professionals still seem to face two big problems when it comes to UX research: a lack of engagement from the team and stakeholders, and the pressure to constantly reduce the time spent on research.

In this article, I’ll take a closer look at each of these challenges and propose a new approach known as ‘FAST UX’ in order to solve them. This is a simple but powerful tool that you can use to speed up UX research and turn stakeholders into active champions of the process.

Contrary to what you might think, speeding up the research process (in both the short and long term) requires effective collaboration, rather than you going away and soldiering on by yourself.

The acronym FAST (Focus, Attend, Summarize, Translate) wraps up a number of techniques and ideas that make the UX process more transparent, fun, and collaborative. I also describe a 5-day project with a central UK government department that shows how the model can be put into practice.

The article is relevant for UX professionals and the people who work with them, including product owners, engineers, business analysts, scrum masters, marketing and sales professionals.

1. Lack Of Engagement Of The Team And Stakeholders

“Stakeholders have the capacity for being your worst nightmare and your best collaborator.”

UIE (2017)

As UX researchers, we need to ensure that “everyone in our team understands the end users with the same empathy, accuracy and depth as we do.” It has been shown that there is no better alternative to increasing empathy than involving stakeholders to actually experience the whole process themselves: from the design of the study (objectives, research questions), to recruitment, set up, fieldwork, analysis and the final presentation.

Anyone who has tried to do this knows that it can be extremely difficult to organize and get stakeholders to participate in research. There are two main reasons for this:

  1. Research is somebody else’s job.
    In my experience, UX professionals are often hired to “do the UX” for a company or organization. Even though the title of “Lead UX Researcher” sounds great and very important in my head, it often leads to misconceptions during kick-off meetings. Everyone automatically assumes that research is solely MY responsibility. It’s no wonder that stakeholders don’t want to get involved in the project. They assume research is my and nobody else’s job.
  2. UX process frameworks are incomplete.
    The problem is that even when stakeholders want to engage and participate in UX, they still do not know *how* they should get involved and *what* they should do. We spend a lot of time selling a UX process and research frameworks that are useful but ultimately incomplete — they do not explain how non-researchers can get involved in the research process.

Fig. 1. Despite our enthusiasm as researchers, stakeholders often don’t understand how to get involved with the research process.

Further, a lot of stakeholders can find words such as ‘design,’ ‘analysis’ or ‘fieldwork’ intimidating or irrelevant to what they do. In fact, “UX is rife with jargon that can be off-putting to people from other fields.” In some situations, terms are familiar but mean something completely different, e.g., research in UX versus marketing research.

2. Pressure To Constantly Reduce The Time For Research

Another issue is that there is a constantly growing pressure to speed up the UX process and reduce the time spent on research. I cannot count the number of times when a project manager asked me to shorten a study even further by skipping the analysis stage or the kick-off sessions.

While previously you could spend weeks on research, a 5-day research cycle is increasingly becoming the norm. In fact, the book Sprint describes how research can dwindle to just a day (from an overall 5-day cycle).

Considering this, there is a LOT of pressure on UX researchers to deliver fast, without compromising the quality of the study. The difficulty increases when there are multiple stakeholders, each with their own opinions, demands, views, assumptions, and priorities.

The Fast UX Approach

Contrary to what you might think, reducing the time it takes to do UX research does not mean that you need to soldier on by yourself. I have done this, and it only works in the short term. It does not matter how amazing the findings are: there are not enough PowerPoint slides in the world to convince a team of the urgency to take action if they have not been on the research journey themselves.

In the long term, the more actively engaged your team and stakeholders are in the research, the more empowered they will feel and the more willing they will be to take action. Productive collaboration also means that you can move together at a quicker pace and speed up the whole research process.

The FAST UX Research framework (see Fig. 2 below) is a tool to truly engage team members and stakeholders in a way that turns them into active advocates and champions of the research process. It shows non-researchers when and how they should get involved in UX Research.


Fig. 2. The FAST User Experience Research framework.

In essence, stakeholders take ownership of each of the UX research stages by carrying out the four activities, each corresponding to its research stage.

Working together reduces the time it takes for UX Research. The true benefit of the approach, however, is that, in the long term, it takes less and less time for the business to take action based on research findings as people become true advocates of user-centricity and the research process.

This approach can be applied to any qualitative research method and with any team. For example, you can carry out FAST usability testing, FAST interviews, FAST ethnography, and so on. In order to be effective, you will need to explain this approach to your stakeholders from the start. Talk them through the framework, explaining each stage. Emphasize that this is what EVERYONE does, that it’s their work as much as the UX researcher’s job, and that it’s only successful if everyone is involved throughout the process.

Stage 1: Focus (Define A Common Goal)

There is a uniform consensus within UX that a research project should start by defining its purpose: why is this research done and how will the results be acted upon?


Fig. 3. Focus is about defining clear objectives and goals for the research; doing so is ultimately the shared responsibility of the team and all stakeholders.

Generally, this is expressed within the research goals, objectives, research questions and/or hypotheses. Most projects start with a kick-off meeting where those are either discussed (based on an available brief) or are defined during the meeting.

The most common problem with kick-off sessions like these is that stakeholders come up with too many things they want to learn from a study. The way to turn the situation around is to assign a specific task to your immediate team (the other UX professionals you work with) and stakeholders (the key decision makers): they will help focus the study from the beginning.

The way they will do that is by working together through the following steps:

  1. Identify as a group the current challenges and problems.
    Ask someone to take notes on a shared document; alternatively, ask everyone to participate and write on sticky notes which are then displayed on a “project wall” for everyone to see.
  2. Identify the potential objectives and questions for a research study.
    Do this the same way you did the previous step. You don’t need to commit to anything yet.
  3. Prioritize.
    Ask the team to order the objectives and questions, starting with the most important ones.
  4. Reword and rephrase.
    Look at the top 3 questions and objectives. Are they too broad or too narrow? Could they be reworded to make the focus of the study clearer? Are they feasible? Do you need to split or merge objectives and questions?
  5. Commit to being flexible.
    Agree on the top 1-2 objectives and ensure that you have agreement from everyone that this is what you will focus on.

Here are some questions you can ask to help your stakeholders and team to get to the focus of the study faster:

  • From the objectives we have recognized, what is most important?
  • What does success look like?
  • If we only learn one thing, which one would be the most important one?

Your role during the process is to:

  • Provide expertise to determine whether the identified objectives and questions are feasible for a single study;
  • Help with the wording of objectives and questions;
  • Design the study (including selecting a methodology) once the focus has been identified.

At first sight, the Focus and Attend (next stages) activities might be familiar as you are already carrying out a kick-off meeting and inviting stakeholders to attend research sessions.

However, adopting a FAST approach means that your stakeholders have as much ownership as you do during the research process because work is shared and co-owned. Reiterate that the process is collaborative and at the end of the session, emphasize that agreeing on clear research objectives is not easy. Remind everybody that having a shared focus is already better than what many teams start with.

Finally, remind the team and your stakeholders what they need to do during the rest of the process.

Stage 2: Attend (Immerse The Team Deeply In The Research Process)

Seeing first hand the experience of someone using a product or service is so rich that there is no substitute for it. This is why getting stakeholders to observe user research is still considered one of the best and most powerful ways to engage the team.


Fig. 4. Attend in FAST UX Research is about encouraging the team and stakeholders not only to be present at all research sessions but to engage actively with the research.

What often happens is that observers join in on the day of the research study and then they spend the time plastered to their laptops and mobile phones. What is worse, some stakeholders often talk to the note-taker and distract the rest of the design team who need to observe the sessions.

This is why it is just as important that you get the team to interact with the research. The following activities allow the team to immerse themselves in the research session. You can ask stakeholders to:

  • Ask questions during the session through a dedicated live chat (e.g. Slack, Google Hangouts, Skype);
  • Take notes on sticky notes;
  • Summarize observations for everyone (see next stage).

Assign one person per session for each of these activities. Have one “live chat manager,” one “note-taker,” and one “observer” who will sum up the session afterwards.

Rotate people for the next session.

Before the session, it’s useful to walk observers through the ‘ground rules’ very briefly. You can have a poster similar to the one GDS developed that will help you do this and remind the team of their role during the study (see Fig. 5 below).


Fig. 5. A poster can be hung in the observation room to remind the team and stakeholders of their responsibilities and the ground rules during observation.

Farrell (2017) provides more detail on effective ways for stakeholders to take notes together. When you have multiple stakeholders and it’s not feasible for them to physically attend a field visit (e.g. on the street, in an office, at the home of the participant), you could stream the session to an observation room.

Stage 3: Summarize (Analysis For Non-Researchers)

I am a strong supporter of the idea that analysis starts the moment fieldwork begins. During the very first research session, you start looking for patterns and interpreting what your data means.


Fig. 6. Summarize in FAST UX Research is about asking the team and your stakeholders to tell you what they found to be the most interesting aspects of the user research.

Even after the first session (but typically towards the end of fieldwork) you can carry out collaborative analysis: a fun and productive way to ensure that everyone participates in one of the most important stages of research.

The collaborative analysis session is an activity where you provide an opportunity for everyone to be heard and create a shared understanding of the research.

Since you’re including other experts’ perspectives, you’re increasing the chances to identify more objective and relevant insights, and also for stakeholders to act upon the results of the study.

Even though ‘analysis’ is an essential part of any research project, a lot of stakeholders get scared by the word. The activity sounds very academic and complex. This is why at the end of each research session, research day, or the study as a whole, the role of your stakeholders and immediate team is to summarize their observations. Summarizing may sound superfluous but is an important part of the analysis stage; this is essentially what we do during “Downloading” sessions.

Listening to someone’s summary provides you with an opportunity to understand:

  • What they paid attention to;
  • What is important for them;
  • Their interpretation of the event.

Summary At The End Of Each Session

You do this by reminding everyone at the beginning of the session that at the end you will enter the room and ask them to summarize their observations and recommendations.

You then end the session by asking each stakeholder the following:

  • What were their key observations (see also Fig. 5)?
    • What happened during the session?
    • Were there any major difficulties for the participant?
    • What were the things that worked well?
  • Was there anything that surprised them?

This will make the team more attentive during the session, as they know that they will need to sum it up at the end. It will also help them to internalize the observations (and later, transition more easily to findings).

This is also the time to consistently share with your team what you think stands out from the study so far. Avoid the temptation to do a ‘big reveal’ at the end. It’s better if outcomes are told to stakeholders many times.

On multiple occasions, research has given me great outcomes that, instead of sharing regularly, I kept to myself until the final report. It didn’t work well. A big reveal at the end leads to bewildered stakeholders who often cannot jump from observations to insights as quickly. As a result, there is either stubborn pushback or indifferent shrugs.

Summary At The End Of The Day

A summary of the event or the day can then naturally transition into a collaborative analysis session. Your job is to moderate the session.

The job of your stakeholders is to summarize the events of the day and the final results. Ask a volunteer to talk the group through what happened during the day. Other stakeholders can then add to these observations.

Summary At The End Of The Study

After the analysis is done, ask one or two stakeholders to summarize the study. Make sure they cover why we did the research, what happened during the study, and what the primary findings are. They can also do this by walking through the project wall (if you have one).

It’s very difficult not to talk about your research and leave someone else to do it. But it’s worth it. No matter how much you’re itching to do this yourself — don’t! It’s a great opportunity for people to internalize research and become comfortable with the process. This is one of the key moments to turn stakeholders into active advocates of user research.

At the end of this stage, you should have 5-7 findings that capture the study.

Stage 4: Translate (Make Stakeholders Active Champions Of The Solution)

“Research doesn’t have a value unless it results in decisions and actions.”

—Lang and Howell (2017).

Even when you agree with the findings, stakeholders might still disagree about what the research means or lack commitment to take further action. This is why after summarizing, ask your stakeholders to work with you and identify the “Now what?” or what it all means for the organization, product, service, team and/or individually for each one of them.


Fig. 7. Translate in FAST UX Research is about asking the team or individual stakeholders to discuss each finding and articulate how it will impact the business, the service or product, and their own work.

Traditionally, it was the UX researcher’s job to write clear, precise, descriptive findings and actionable recommendations. However, if the team and stakeholders are not part of identifying actionable recommendations, they might be resistant to change in the future.

To prevent later pushback, ask stakeholders to identify the “Now what?” (also referred to as ‘actionable recommendations’). Together, you’ll be able to identify how the insights and findings will:

  • Affect the business and what needs to be done now;
  • Affect the product/service and what changes we need to make;
  • Affect people individually and the actions they need to take;
  • Lead to potential problems and challenges and their solutions;
  • Help solve problems or identify potential solutions.

Stakeholders and the team can translate the findings at the end of a collaborative analysis session.

If you decide to separate the activities and conduct a meeting in which the only focus is on actionable recommendations, then consider the following format:

  1. Briefly talk through the 5-7 main findings from the study (as a refresher if this stage is done separately from the analysis session or with other stakeholders).
  2. Split the group into teams and ask them to work on one finding/problem at a time.
  3. Ask them to list as many ways they see the finding affecting them.
  4. Ask one person from each group to present the findings back to the team.
  5. Ask one/two final stakeholders to summarize the whole study, together with the methods, findings, and recommendations.

Later, you can have multiple similar workshops; this is how you get to engage different departments from the organization.

FAST UX In Practice

An excellent example of a FAST UX Research approach in practice is a project I was hired to carry out for a central UK government department. The ultimate goal of the project was to identify user requirements for a very complex internal system.

At first sight, this was a very challenging project because:

  • There was no time to get to know the department or the client.
    Usually, I would have at least a week or two to get to know the client, their needs, opinions, internal pressures, and challenges. For this project, I had to start work on Monday with a team I had never met; in a building I had never worked, in a domain I knew little about, and finish on Friday the same week.
  • The system was very complex and required intense research.
    The internal system and the nature of work were very complex; this required gathering data with at least a few research methods (for triangulation).
  • This was the first time the team had worked with a UX Researcher.
    The stakeholders were primarily IT specialists. However, I was lucky that they were very keen and enthusiastic to be involved in the project and get their hands dirty.
  • Stakeholder availability.
    As is the case on many other projects, all stakeholders were extremely busy as they had their own work on top of the project. Nonetheless, we made it work, even if it meant meeting over lunch, or for a 15-minute wrap up before we went home.
  • There were internal pressures and challenges.
    As with any department and huge organization, there were a number of internal pressures and challenges. Some of them I expected (e.g. legacy systems, slow pace of change) but some I had no clue about when I started.
  • We had to coordinate work with external teams.
    An additional challenge was the need to work with and coordinate efforts with external teams at another UK department.

Despite all of these challenges, this was one of the most enjoyable projects I have worked on because of the tight collaboration initiated by the FAST approach.

The project consisted of:

  • 1 day of kick-off sessions and getting to know the team,
  • 2.5 days of contextual inquiries and shadowing of internal team members,
  • Half a day for a co-creation workshop, and
  • 1 day for analysis and results reporting.

In the process, I gathered data from 20+ employees and collected 16+ hours of observations, 300+ photos, and about 100 pages of notes. This is a great example of cramming 3 weeks’ worth of work into a mere 5-day research cycle. More importantly, people in the department were really excited about the process.

Here is how we did it using a FAST UX Research approach:

  • Focus
    At the beginning of the project, the two key stakeholders identified what the focus of the research would be, while my role was mainly to help prioritize the objectives, tweak the research questions, and check for feasibility. In this sense, I listened and mainly asked questions, interjecting occasionally with examples from previous projects or options that helped adjust our approach.

    While I wrote the main discussion guide for the contextual inquiries and shadowing sessions, we sat together with the primary team to discuss and design the co-creation workshop with internal users of the system.

  • Attend
    During the workshop, one of the stakeholders moderated half of the session, while the other took notes and closely observed the participants. It was a huge success internally: stakeholders felt there was better visibility for their efforts to modernize the department, while employees felt listened to and involved in the research.
  • Summarize
    Immediately after the workshop, we sat together with the stakeholders for a 30-minute meeting where I had them summarize their observations.

    As a result of the shadowing, contextual inquiries and co-creation workshop, we were able to identify 60+ issues and problems with the internal system (with regards to integration, functionality, and usability), all captured in six high-level findings.

  • Translate
    Later, we discussed with the team how each of the six major findings translated to a change or implication for the department, the internal system, as well as collaboration with other departments.

We were so perfectly aligned with the team that when we had to talk about our work in front of another UK government department, I could let the stakeholders talk about the process and our progress.

My final task (over two additional days) was to document all of the findings in a research report. This was necessary as a knowledge repository because I had to move onto other projects.

With a more traditional approach, the project could easily have spanned 3 weeks. More importantly, quickly understanding individual and team pressures and challenges was key to the success of the new system. This could not have happened within the allocated time without a collaborative approach.

A FAST UX approach resulted in tight collaboration, strong co-ownership, and a shared sense of progress, all of which allowed us to shorten the project’s timeline and instill a feeling of excitement about the UX research process.

Have You Tried It Out Already?

As UX research becomes ever more popular, gone are the days when we could soldier on by ourselves and only consult stakeholders at the end.

Mastering our craft as UX researchers means engaging others within the process and being articulate, clear, and transparent about our work. The FAST approach is a simple model that shows how to engage non-researchers with the research process. Reducing the time it takes to do research, both in the short (i.e. the study itself) and long term (i.e. using the research results), is a strategic advantage for the researcher, team, and the business as a whole.

Would you like to improve your efficiency and turn stakeholders into user research advocates? Go and try it out. You can then share your stories and advice here.

I would love to hear your comments, suggestions, and any feedback you care to share! If you have tried it out already, do you have success stories you want to share? Be as open as you can: what worked well, and what didn’t? As with all other things UX, it’s most fun if we learn together as a team.

How To Improve Your Design Process With Data-Based Personas

Tim Noetzel



Most design and product teams have some type of persona document. Theoretically, personas help us better understand our users and meet their needs. The idea is that codifying what we’ve learned about distinct groups of users helps us make better design decisions. Referring to these documents ourselves and sharing them with non-design team members and external stakeholders should ultimately lead to a user experience more closely aligned with what real users actually need.

In reality, personas rarely prove equal to these expectations. On many teams, persona documents sit abandoned on hard drives, collecting digital dust while designers continue to create products based primarily on whim and intuition.

In contrast, well-researched personas serve as a proxy for the user. They help us check our work and ensure that we’re building things users really need.

In fact, the best personas don’t just describe users; they actually help designers predict their behavior. In her article on persona creation, Laura Klein describes it perfectly:

“If you can create a predictive persona, it means you truly know not just what your users are like, but the exact factors that make it likely that a person will become and remain a happy customer.”

In other words, useful personas actually help design teams make better decisions because they can predict with some accuracy how users will respond to potential product changes.

Obviously, for personas to facilitate these types of predictions, they need to be based on more than intuition and anecdotes. They need to be data-driven.

So, what do data-driven personas look like, and how do you make one?

Start With What You Think You Know

The first step in creating data-driven personas is similar to the typical persona creation process. Write down your team’s hypotheses about what the key user groups are and what’s important to each group.

If your team is like most, some members will disagree with others about which groups are important, the particular makeup and qualities of each persona, and so on. This type of disagreement is healthy, but unlike the usual persona creation process you may be used to, you’re not going to get bogged down here.

Instead of debating the merits of each persona (and the various facets and permutations thereof), the important thing is to be specific about the different hypotheses you and your team have and write them down. You’re going to validate these hypotheses later, so it’s okay if your team disagrees at this stage. You may choose to focus on a few particular personas, but make sure to keep a backlog of other ideas as well.

First, start by recording all the hypotheses you have about key personas. You’ll refine these through user research in the next step.

I recommend aiming for a short, 1–2 sentence description of each hypothetical persona that details who they are, what root problem they hope to solve by using your product, and any other pertinent details.

You can use the traditional user stories framework for this. If you were creating hypothetical personas for Craigslist, one of these statements might read:

“As a recent college grad, I want to find cheap furniture so I can furnish my new apartment.”

Another might say:

“As a homeowner with an extra bedroom, I want to find a responsible tenant to rent this space to so I can earn some extra income.”

If you have existing data — things like user feedback emails, NPS scores, user interview notes, or analytics data — be sure to go over them and include relevant data points in your notes along with your user stories.

Validate And Refine

The next step is to validate and refine these hypotheses with user interviews. For each of your hypothetical personas, you’ll want to start by interviewing 5 to 10 people who fit that group.

You have three key goals for these interviews. For each group, you need to:

  1. Understand the context in which they need to solve the problem.
  2. Confirm that members of the persona group agree that the problem you recorded is an urgent and painful one that they struggle to solve now.
  3. Identify key predictors of whether a member of this persona is likely to become and remain an active user.

The approach you take to these interviews may vary, but I recommend a hybrid approach between a traditional user interview, which is very non-leading, and a Lean Problem interview, which is deliberately leading.

Start with the traditional user interview approach and ask behavior-based, non-leading questions. In the Craigslist example, we might ask the recent college grad something like:

“Tell me about the last time you purchased furniture. What did you buy? Where did you buy it?”

These types of questions are great for establishing whether the interviewee recently experienced the problem in question, how they solved it, and whether they’re dissatisfied with their current solution.

Once you’ve finished asking these types of questions, move on to the Lean Problem portion of the interview. In this section, you want to tell a story about a time when you experienced the problem — establishing the various issues you struggled with and why it was frustrating — and see how they respond.

You might say something like this:

“When I graduated college, I had to get new furniture because I wasn’t living in the dorm anymore. I spent forever looking at furniture stores, but they were all either ridiculously expensive or big-box stores with super-cheap furniture I knew would break in a few weeks. I really wanted to find good furniture at a reasonable price, but I couldn’t find anything and I eventually just bought the cheap stuff. It inevitably broke, and I had to spend even more money, which I couldn’t really afford. Does any of that resonate with you?”

What you’re looking for here is emphatic agreement. If your interviewee says “yes, that resonates” but doesn’t get much more excited than they were during the rest of the interview, the problem probably wasn’t that painful for them.

You can validate or invalidate your persona hypotheses with a series of quick, 30-minute interviews.

On the other hand, if they get excited, empathize with your story, or give their own anecdote about the problem, you know you’ve found a problem they really care about and need to be solved.

Finally, make sure to ask any demographic questions you didn’t cover earlier, especially those around key attributes you think might be significant predictors of whether somebody will become and remain a user. For example, you might think that recent college grads who get high-paying jobs aren’t likely to become users because they can afford to buy furniture at retail; if so, be sure to ask about income.

You’re looking for predictable patterns. If you bring in 5 members of your persona and 4 of them have the problem you’re trying to solve and desperately want a solution, you’ve probably identified a key persona.

On the other hand, if you’re getting inconsistent results, you likely need to refine your hypothetical persona and repeat this process, using what you learn in your interviews to form new hypotheses to test. If you can’t consistently find users who have the problem you want to solve, it’s going to be nearly impossible to get millions of them to use your product. So don’t skimp on this step.

Create Your Personas

The penultimate step in this process is creating the actual personas themselves. This is where things get interesting. Unlike traditional personas, which are typically static, your data-driven personas will be living, breathing documents.

The goal here is to combine the lessons you learned in the previous step — about who the user is and what they need — with data that shows how well the latest iteration of your product is serving their needs.

At my company Swish, each one of our personas includes two sections with the following data:

Predictive User Data:
  • A description of the user, including predictive demographics.
  • Quotes from at least 3 actual users that describe the jobs-to-be-done.
  • The percentage of the potential user base the persona represents.

Product Performance Data:
  • The percentage of our current user base the persona represents.
  • The latest activation, retention, and referral rates for the persona.
  • The current NPS score for the persona.
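
To make the two halves concrete, here is one possible way to model such a persona record in code. The field names simply mirror the lists above; they are not a standard format.

// Illustrative persona record combining predictive user data (from
// interviews) with product performance data (refreshed as the product changes).
type Persona struct {
    // Predictive user data
    Description    string   // who the user is, incl. predictive demographics
    UserQuotes     []string // 3+ real quotes describing the jobs-to-be-done
    PotentialShare float64  // % of the potential user base this persona represents

    // Product performance data
    CurrentShare   float64 // % of the current user base
    ActivationRate float64 // latest activation rate for the persona
    RetentionRate  float64 // latest retention rate
    ReferralRate   float64 // latest referral rate
    NPS            int     // current NPS score for the persona
}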

If you’re looking for more ideas for data to include, check out Coryndon Luxmoore’s presentation on how his team created data-driven personas at Buildium.

It may take some time for your team to produce all this information, but it’s okay to start with what you have and improve the personas over time. Your personas won’t be sitting on a shelf. Every time you release a new feature or change an existing one, you should measure the results and update your personas accordingly.

Integrate Your Personas Into Your Workflow

Now that you’ve created your personas, it’s time to actually use them in your day-to-day design process. Here are 4 opportunities to use your new data-driven personas:

  1. At Standups
    At Swish, our standups are a bit different. We start these meetings by reviewing the activation, retention, and referral metrics for each persona. This ensures that — as we discuss yesterday’s progress and today’s obstacles — we remain focused on what really matters: how well we’re serving our users.
  2. During Prioritization
    Your data-driven personas are a great way to keep team members honest as you discuss new features and changes. When you know how much of your user base the persona represents and how well you’re serving them, it quickly becomes obvious whether a potential feature could actually make a difference. Suddenly deciding what to work on won’t require hours of debate or horse-trading.
  3. At Design Reviews
    Your data-driven personas also keep team members honest as you discuss new designs. When team members can credibly represent users with actual quotes from user interviews, their feedback quickly becomes less subjective and more useful.
  4. When Onboarding New Team Members
    New hires inevitably bring a host of implicit biases and assumptions about the user with them when they start. By including your data-driven personas in their onboarding documents, you can get new team members up to speed much more quickly and ensure they understand the hard-earned lessons your team learned along the way.

Keeping Your Personas Up To Date

It’s vitally important to keep your personas up-to-date so your team members can continue to rely on them to guide their design thinking.

As your product improves, it’s simple to update NPS scores and performance data. I recommend doing this monthly at a minimum; if you’re working on an early-stage, rapidly-changing product, you’ll get better mileage by updating these stats weekly instead.

It’s also important to check in with members of your personas periodically to make sure your predictive data stays relevant. As your product evolves and the competitive landscape changes, your users’ views about their problems will change as well. If your growth starts to plateau, another round of interviews may help to unlock insights you didn’t find the first time. Even if everything is going well, try to check in with members of your personas — both current users of your product and some non-users — every 6 to 12 months.

Wrapping Up

Building data-driven personas is a challenging project that takes time and dedication. You won’t uncover the insights you need or build the conviction necessary to unify your team with a week-long throwaway project.

But if you put in the time and effort necessary, the results will speak for themselves. Having the type of clarity that data-driven personas provide makes it far easier to iterate quickly, improve your user experience, and build a product your users love.

Further Reading

If you’re interested in learning more, I highly recommend checking out the linked articles above.


Monthly Web Development Update 3/2018: Service Workers, Building A CDN, And Cheating At Design

Service Worker is probably one of the most misrepresented technologies we currently have. When I hear people talking about it, the topic almost always revolves around serving an app when a user is offline. However, Service Worker can do so much more than that, and every week I come across new articles that show how powerful the technology really is.

This month, for example, we can learn how to use Service Worker for cross-tab messaging and to offload requests into the background with the Background Sync API. I think the toolset we now have in our browsers already allows us to build great experiences regardless of the network state. Now it’s up to us to make those experiences so great that users truly love them. And that’s probably the hardest part.

News

Sketch 49 has arrived, and with it comes the new Prototyping in Sketch feature, which lets you see an entire flow in action.

General

  • Ed Ellson examined Chrome’s Background Sync API and the retry strategy it uses to perform a request. By allowing synchronization in the background after a first attempt has failed, the API helps us improve the browsing experience for users who go offline or are on unstable connections.

UI/UX

Using color and weight instead of size to create visual hierarchy is just one of the seven practical tips for cheating at design that Adam Wathan and Steve Schoger share.

Security

  • With GraphQL you can query exactly what you want whenever you want. This is amazing for working with an API but also has complex security implications. Instead of asking for legitimate, useful data, a malicious actor could submit an expensive, nested query to overload your server, database, network, or all of these. To prevent this from happening, Max Stoiber shows us how we can secure the GraphQL API in our projects.

Privacy

  • WebKit is introducing the Storage Access API. The new API targets one of the major issues with Safari’s Intelligent Tracking Protection (ITP): identifying users who are logged in to a first-party service but view its content embedded on a third-party site (YouTube videos on a blog, for example). The Storage Access API allows third-party embeds to request access to their first-party cookies when the user interacts with them. A good solution to protect user privacy by default and allow exceptions on request.

Web Performance

  • Janos Pasztor built his own Content Delivery Network because he thinks it can be a better solution than using existing third parties. The code for the CDN of his personal website is now available on GitHub. A nice web performance article that looks at common solutions from a different angle.
  • A year after Facebook’s announcement to broadly use Cache-Control: immutable, Paul Calvano examined how widespread its usage is on the web, apart from the few big players. Interesting research, and it’s still sad to see that this useful performance tool is used so little. At Colloq, we use it quite a lot, which saves us a lot of traffic and load on our servers and enables us to serve many pages nearly instantly to recurring users.
Global stats for the custom CDN that Janos Pasztor built.


Accessibility

The Accessibility Checklist helps build accessibility into your process, no matter your role or stage in a project.

Work & Life

  • This week I read an article by Alex Duloz, and his words still stick with me: “When we develop a new application, when we post content on the Internet, whatever we do that people will have access to, we should consider just for a minute if our contribution adds up to the level of dumbness kids/teenagers are exposed to. If it does, we should refrain from going live.” The truth is, most of us, including me, don’t consider this before posting on the Internet. We create funny things, share funny pictures and try to get fame with silly posts. But in reality, we shape society with this. Let’s try to provide more useful resources and make the consumption of this more enjoyable so young people can profit from our knowledge and not only view things we think are funny. “We should always consider how teenagers will use what we release.”
  • The MIT OpenCourseWare released a lot of free audio and video lectures. This is amazing news and makes great content available to broader masses.
  • Jake Knapp says great work requires idealism and cynicism and has strong arguments to back up this theory. An article worth reading.
  • There’s an important article on how unhappiness has grown in America’s population since around the year 2000. It reveals that while income inequality might play a role, the more important aspect is that young people who use a lot of digital media are unhappier than those who use it only up to an hour a day. Interestingly, people who don’t use digital media at all are unhappy, too, so the takeaway could be that we should use digital media only moderately, at least in our private lives. I bet it’ll make a big difference.
  • Following the theory of Michael Bradley, projects don’t necessarily need a roadmap to succeed. Instead, he suggests creating a moral compass that points out why the project exists and what its purpose is.

Going Beyond…

We hope you enjoyed this Web Development Update. The next one is scheduled for April 13th. Stay tuned.

Excerpt from – 

Monthly Web Development Update 3/2018: Service Workers, Building A CDN, And Cheating At Design

How To Make A WordPress Plugin Extensible

Have you ever used a plugin and wished it did something a bit differently? Perhaps you needed something unique that was beyond the scope of the settings page of the plugin.

I have personally encountered this, and I’m betting you have, too. If you’re a WordPress plugin developer, most likely some of your users have also encountered this while using your plugin.

Here’s a typical scenario: You’ve finally found that plugin that does everything you need — except for one tiny important thing. There is no setting or option to enable that tiny thing, so you browse the documentation and find that you can’t do anything about it. You request the feature in the WordPress plugin’s support forum — but no dice. In the end, you uninstall it and continue your search.

Imagine if you were the developer of this plugin. What would you do if a user asked for some particular functionality?

The ideal thing would be to implement it. But if the feature was for a very special use case, then adding it would be impractical. It wouldn’t be good to have a plugin setting that only 0.1% of your users would have a use for.

You’d only want to implement features that affect the majority of your users. In reality, 80% of users use 20% of the features (the 80/20 rule). So, make sure that any new feature is highly requested, and that 80% of your users would benefit from it, before implementing it. If you created a setting for every feature that is requested, then your plugin would become complicated and bloated — and nobody wants that.

Your best bet is to make the plugin extensible, code-wise, so that other people can enhance or modify it for their own needs.

In this article, you’ll learn about why making your plugin extensible is a good idea. I’ll also share a few tips of how I’ve learned to do this.

What Makes A Plugin Extensible?

In a nutshell, an extensible plugin means that it adheres to the “O” part of the SOLID principles of object-oriented programming — namely, the open/closed principle.

If you’re unfamiliar with the open/closed principle, it basically means that other people shouldn’t have to edit your code in order to modify something.

Applying this principle to a WordPress plugin, it would mean that a plugin is extensible if it has provisions in it that enable other people to modify its behavior. It’s just like how WordPress allows people to “hook” into different areas of WordPress, but at the level of the plugin.
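
If you haven’t worked with WordPress hooks before, here’s a minimal sketch of hooking into a core filter (the callback name myprefix_shout_titles is purely an illustrative choice):


// Hook into WordPress' built-in 'the_title' filter to modify every post title.
add_filter( 'the_title', 'myprefix_shout_titles' );
function myprefix_shout_titles( $title ) {
    return strtoupper( $title );
}

Our goal is to offer the same kind of entry points, but scoped to our own plugin.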

A Typical Example Of A Plugin

Let’s see how we can create an extensible plugin, starting with a sample plugin that isn’t.

Suppose we have a plugin that generates a sidebar widget that displays the titles of the three latest posts. At the heart of the plugin is a function that simply wraps the titles of those three posts in list tags:


function get_some_post_titles() {
  $args = array(
      'posts_per_page' => 3,
  );

  $posts = get_posts( $args );

  $output = '<ul>';

  foreach ( $posts as $post ) {
    $output .= '<li>' . $post->post_title . '</li>';
  }

  $output .= '</ul>';

  return $output;
}

While this code works and gets the job done, it isn’t quite extensible.

Why? Because the function is set in its own ways, there’s no way to change its behavior without modifying the code directly.

What if a user wanted to display more than three posts, or perhaps include links with the posts’ titles? There’s no way to do that with the code above. The user is stuck with how the plugin works and can do nothing to change it.

Including A Hundred Settings Isn’t The Answer

There are a number of ways to enhance the plugin above to allow users to customize it.

One such way would be to add a lot of options in the settings, but even that might not satisfy all of the possibilities users would want from the plugin.

What if the user wanted to do any of the following (scenarios we’ll revisit later):

  • display WooCommerce products or posts from a particular category;
  • display the items in a carousel provided by another plugin, instead of as a simple list;
  • perform a custom database query, and then use that query’s posts in the list.

If we added a hundred settings to our widget, then we would be able to cover the use cases above. But what if one of these scenarios changes, and now the user wants to display only WooCommerce products that are currently in stock? The widget would need even more settings to accommodate this. Pretty soon, we’d have a gazillion settings.

Also, a plugin with a huge list of settings isn’t exactly user-friendly. Steer away from this route if possible.

So, how would we go about solving this problem? We’d make the plugin extensible.

Adding Our Own Hooks To Make It Extensible

By studying the plugin’s code above, we see a few operations that the main function performs:

  • It gets posts using get_posts.
  • It generates a list of post titles.
  • It returns the generated list.

If other people were to modify this plugin’s behavior, their work would most likely involve these three operations. To make our plugin extensible, we would have to add hooks around these to open them up for other developers.

In general, these are good areas to add hooks to a plugin:

  • around and within the major processes,
  • when building output HTML,
  • for altering post or database queries,
  • before returning values from a function.

A Typical Example Of An Extensible Plugin

Taking these rules of thumb, we can add the following filters to make our plugin extensible:

  • add myplugin_get_posts_args for modifying the arguments of get_posts,
  • add myplugin_get_posts for overriding the results of get_posts,
  • add myplugin_list_item for customizing the generation of a list entry,
  • add myplugin_get_some_post_titles for overriding the returned generated list.

Here’s the code again with all of the hooks added in:


function get_some_post_titles() {
  $args = array(
      'posts_per_page' => 3,
  );

  // Let other people modify the arguments.
  $posts = get_posts( apply_filters( 'myplugin_get_posts_args', $args ) );

  // Let other people modify the post array, which will be used for display.
  $posts = apply_filters( 'myplugin_get_posts', $posts, $args );

  $output = '<ul>';

  foreach ( $posts as $post ) {
    // Let other people modify the list entry.
    $output .= '<li>' . apply_filters( 'myplugin_list_item', $post->post_title, $post ) . '</li>';
  }

  $output .= '</ul>';

  // Let other people modify our output list.
  return apply_filters( 'myplugin_get_some_post_titles', $output, $args );
}

You can also get the code above in the GitHub archive.

I’m adding a lot of hooks here, which might seem impractical because the sample code is quite simple and small, but it illustrates my point: By adding just four hooks, other developers can now customize the plugin’s behavior in all sorts of ways.

Namespacing And Context For Hooks

Before proceeding, note two important things about the hooks we’ve implemented:

  • We’re namespacing the hooks with myplugin_.
    This ensures that the hook’s name doesn’t conflict with some other plugin’s hook. This is just good practice, because if another hook with the same name is called, it could lead to unwanted effects.
  • We’re also passing a reference to $args in all of the hooks for context.
    I do this so that if others use this filter to change something in the flow of the code, they can use that $args parameter as a reference to understand why the hook was called and adjust their behavior accordingly (see the sketch below).
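
To see why that context matters, here’s a minimal sketch of a consumer branching on the $args parameter (the reversal logic is purely illustrative, not part of the plugin):


// Only reorder the posts when the plugin asked for its default three items.
add_filter( 'myplugin_get_posts', 'maybe_reverse_posts', 10, 2 );
function maybe_reverse_posts( $posts, $args ) {
    if ( isset( $args['posts_per_page'] ) && 3 === $args['posts_per_page'] ) {
        return array_reverse( $posts );
    }
    return $posts;
}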

The Effects Of Our Hooks

Remember the unique scenarios I talked about earlier? Let’s revisit those and see how our hooks have made them possible:

  • If the user wants to display WooCommerce products or posts from a particular category, then either they can use the filter myplugin_get_posts_args to add their own arguments for when the plugin queries posts, or they can use myplugin_get_posts to completely override the posts with their own list.
  • If the user wants to display the items in a carousel provided by another plugin, instead of as a simple list, then they can override the entire output of the function with myplugin_get_some_post_titles, and instead output a carousel from there.
  • If the user wants to perform a custom database query and then use that query’s posts in the list, then, similar to the first scenario, they can use myplugin_get_posts to use their own database query and change the post array.

Much better!

A Quick Example Of How To Use Our Filters

Developers can use add_filter to hook into our filters above (or use add_action for actions).

Taking our first scenario above, a developer can just do the following to display WooCommerce products using the myplugin_get_posts_args filter that we created:

add_filter( 'myplugin_get_posts_args', 'show_only_woocommerce_products' );
function show_only_woocommerce_products( $args ) {
   $args['post_type'] = 'product';
   return $args;
}
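
In the same spirit, here’s a minimal sketch of the third scenario, overriding the post array with a custom query via the myplugin_get_posts filter we created (the 'news' category is a hypothetical example):

add_filter( 'myplugin_get_posts', 'use_custom_query_posts' );
function use_custom_query_posts( $posts ) {
    // Replace the plugin's posts with the results of our own query.
    $query = new WP_Query( array(
        'category_name'  => 'news',
        'posts_per_page' => 5,
    ) );
    return $query->posts;
}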

We Can Also Use Action Hooks

Aside from using apply_filters, we can also use do_action to make our code extensible. The difference between the two is that the former allows others to change a variable, while the latter allows others to execute additional functionality at various points in our code.

When using actions, we’re essentially exposing the plugin’s flow to other developers and letting them perform other things in tandem.

It might not be useful in our example (because we are only displaying a small list of posts), but it would be helpful in others. For example, given an extensible backup plugin, we could create a plugin that also uploads the backup file to a third-party service such as Dropbox.
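
As a hedged sketch of that backup example (the hook name myplugin_backup_complete, the file path, and the callback are all hypothetical, not taken from any real backup plugin):

// In the backup plugin, after the backup file has been written:
$backup_file_path = '/tmp/backup.zip'; // hypothetical path for illustration
do_action( 'myplugin_backup_complete', $backup_file_path );

// In a separate plugin, react to that event, e.g. by uploading to Dropbox:
add_action( 'myplugin_backup_complete', 'upload_backup_to_dropbox' );
function upload_backup_to_dropbox( $backup_file_path ) {
    // Hand the file path to whatever Dropbox client you use here.
}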

“Great! But Why Should I Care About Making My Plugin Extensible?”

Well, if you’re still not sold on the idea, here are a few thoughts on why allowing other people to modify your plugin’s behavior is a good idea.

It Opens Up the Plugin to More Customization Possibilities

Everyone has different needs. And there’s a big chance your plugin won’t satisfy all of them, nor can you anticipate them. Opening up your plugin to allow for modifications to key areas of your plugin’s behavior can do wonders.

It Allows People to Introduce Modifications Without Touching the Plugin’s Code

Other developers won’t be forced to change your plugin’s files directly. This is a huge benefit because directly modifying a plugin’s file is generally bad practice. If the plugin gets updated, then all of your modifications will be wiped.

If we add our own hooks for other people to use, then modifications can live in an external location, say, in another plugin. Done this way, the original plugin isn’t touched at all; it can be updated freely without breaking anything, while all of the modifications in the other plugin remain intact.

Conclusion

Extensible plugins are really awesome and give us room for a lot of customization possibilities. If you make your plugin extensible, your users and other developers will love you for it.

Take a look at plugins such as WooCommerce, Easy Digital Downloads and ACF. These plugins are extensible, and you can easily tell because numerous other plugins in WordPress’ plugins directory add functionality to them. They also provide a wide array of action and filter hooks that modify various aspects of the plugins. The rules of thumb I’ve enumerated above have come up in my study of them.

Here are a few takeaways to make your plugin extensible:

  • Follow the open/closed principle. Other people shouldn’t have to edit your code in order to modify something.
  • To make your plugin extensible, add hooks in these places:
    • around and within major processes,
    • when building the output HTML,
    • for altering post or database queries,
    • before returning values from a function.
  • Namespace your hooks’ names with the name of your plugin to prevent naming conflicts.
  • Try passing other variables that are related to the hook, so that other people get some context of what’s happening in the hook.
  • Don’t forget to document your plugin’s hooks, so that other people can learn of them (see the sketch below).
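
On that last point, one common convention (a minimal sketch, following the docblock style WordPress core uses for its own hooks) is to document each filter directly above its apply_filters() call:

/**
 * Filters the arguments passed to get_posts() when building the title list.
 *
 * @param array $args The get_posts() arguments.
 */
$posts = get_posts( apply_filters( 'myplugin_get_posts_args', $args ) );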

Further Reading

Here are some resources if you want to learn more about extending plugins:


Original post:  

How To Make A WordPress Plugin Extensible

A Comprehensive Guide To User Testing

(This is a sponsored article.) With a prototype of your design built, it’s important to start testing it to see if the assumptions you have made are correct. In this article, the seventh in my ongoing series exploring the user experience design process, I’ll explore the importance of user testing.
As I noted in my earlier article on research, where I explored the research landscape, there are many different research methods you can use, and there are a variety of different user tests you can run, including:

Link:  

A Comprehensive Guide To User Testing

How To Streamline WordPress Multisite Migrations With MU-Migration

Migrating a standalone WordPress site to a site network (or “multisite”) environment is a tedious and tricky endeavor; the opposite is also true. The WordPress Importer works reasonably well for smaller, simpler sites, but leaves room for improvement. It exports content, but not site configuration data such as Widget and Customizer configurations, plugins, and site settings. The Importer also struggles with large amounts of content. In this article, you’ll learn how to streamline this type of migration by using MU-Migration, a WP-CLI plugin.

Link: 

How To Streamline WordPress Multisite Migrations With MU-Migration

How To Make A Drag-and-Drop File Uploader With Vanilla JavaScript

It’s a known fact that file selection inputs are difficult to style the way developers want, so many simply hide them and create a button that opens the file selection dialog instead. Nowadays, though, we have an even fancier way of handling file selection: drag and drop.
Technically, this was already possible, because most (if not all) implementations of the file selection input allow you to drag files over it to select them, but this requires you to actually show the file element.

Read this article:  

How To Make A Drag-and-Drop File Uploader With Vanilla JavaScript

“Maybe Later” – A New Interaction Model for Ecommerce Entrance Popups

I’d guess that over half of the e-commerce stores I visit use entrance popups to advertise their current deal. Most often it’s a discount.

What is an Entrance Popup and What’s Wrong With Them?

They are as they sound: a popup that appears as soon as you arrive on a site. They’re definitely the most interruptive of all popups because you haven’t even had a chance to look around.

I get why they’re used, though: they work really well at one thing, which is letting you know that an offer exists and what it is. And given the high level of competition for online dollars, it makes sense that they’re so prolific.

The intrusion isn’t the only point of frustration. There’s also the scenario where you arrive on a site, see an offer appear, you find it interesting and potentially very valuable (who doesn’t want 50% off?), but you want to do some actual looking around – the shopping part – before thinking about the offer. And when you’re forced to close the popup in order to continue, it’s frustrating because you want the offer! You just don’t want it right now.

So, given that entrance popups are so common, that they’re not going anywhere anytime soon, and that they create these points of frustration, I’ve been working on a few alternative ways to solve the same problem.

The one I want to share with you today is called “Maybe Later”.

“Maybe Later” is a Solution to Increase Engagement and Reduce Frustration

As you saw in the header image, instead of the now classic YES/NO popup – the one that gets abused by shady marketers (Technology isn’t the Problem, We Are.) – “Maybe Later” includes a third option called, you guessed it, “Maybe Later”.

The middle option gives some control back to the shopper.

It’s more than just a third button, here’s how it works (I’ll refer to the sketch opposite):

  1. The popup appears when you enter the site. You can choose “No” to get rid of it, “Yes” to take advantage of it, or “Maybe Later” to register your interest but get it out of your way.
  2. When you click “Maybe Later” a cookie is set to log your interest.
  3. Now, while you are browsing the rest of the site, a Sticky Bar – targeted at the cookie that was set – appears at the bottom (or top) of the page with a more subtle reminder of the offer, so that you know it’s there and ready if you decide to take advantage of it.
  4. If you decide against the offer, you can click “No thanks” on the Sticky Bar, the cookie is deleted, and the offer is hidden for good.

The core purpose of this idea is to put control back in the hands of shoppers while giving retailers an effective, permission-based way to engage with them.




Visual Hierarchy on Popup Buttons

When you have more than a single button, it’s important to establish visual design cues to indicate how the hierarchical dominance plays out. For you as a marketer, the most important of the three buttons is YES, MAYBE LATER is second, and NO is the least.

You can create a better user experience for your visitors by using the correct visual hierarchy and affordance when it comes to button design. In the image below, there is a progression of visual dominance from left to right (which is the correct direction – in Western society). Left is considered a backward step (in online interaction design terms), and right is a progression to the goal.

From left to right we see:

  • The NO button is designed as a ghost button, which has the least affordance and visual weight of the three.
  • The MAYBE LATER button gains some solidity through increased opacity.
  • The YES button has a fully opaque design in the primary call-to-action colour of the theme.

You can achieve a similar level of dominance by making the secondary action a link instead of a button, which is a great visual hierarchy design technique. What I don’t like is when people do this, but they make the “No thanks” link really tiny. If you’re going to provide an option, do it with a little dignity and make it easy to see and click.

See the “Maybe Later” Popup-to-Sticky-Bar Model in Action

Alrighty, demo time! I have a few instructions for you to follow to see it in action. I didn’t load the popup on this page as it’s supposed to be an entrance popup and I needed to set the scene first. But I’ll use some trickery to make it happen for you.

Follow these instructions and you’ll see “Maybe Later” in action:

Please note: this works on desktop only. The reason is that Google dislikes entrance popups on mobile. Sticky Bars are the Google-friendly way to present promos on mobile, so they work on their own, but the combo isn’t appropriate.

  1. Visit this page (opens in new window).
  2. Click the “Maybe Later” button and the popup will close.
  3. Refresh that page and you’ll see a Sticky Bar with the same offer appear at the bottom.
  4. Come back to this page.
  5. Refresh this page and you’ll see the Sticky Bar here too.
  6. Click “No thanks” to get rid of it when you’ve had enough :D

Here’s the entrance Popup you will see:


And the Sticky Bar you will see following that:

How to Use “Maybe Later” on Your Website

If you’d like to give it a try, follow the instructions below in your Unbounce account. (You should sign up for Unbounce if you haven’t already: you get Landing Pages, Popups, and Sticky Bars all in the same builder).

You can also see what Popups and Sticky Bars look like on your website by entering your URL on our new Live Preview Tool.

“Maybe Later” Setup Instructions

Caveat: This is not an official Unbounce feature, and as such is not technically supported. But it is damn cool. And if enough people scream really hard, maybe I’ll be able to persuade the product team to add it to the list. And please talk to a developer before trying this in a production environment.

Step 1: Create a Popup in Unbounce

Step 2: Add “Maybe Later” Script to the Popup

You’ll find the “JavaScripts” window in the bottom-left of the builder.

Add the following script “Before the body end tag”, replacing “lp-pom-button-50” with the id of your “Maybe Later” button, and unbounce.com with your own domain.

document.getElementById("lp-pom-button-50").onclick = function() 
parent.postMessage(JSON.stringify('later'), 'https://unbounce.com');

Step 3: Set URL Targeting on Popup

Set up the URL targeting for where you want the popup to appear. I chose the post you’re reading right now (with a ?demo extension so it would only fire when I sent you to that URL).

Step 4: Set Cookie Targeting on Popup

Set up the cookie targeting to “Not show” when the “Maybe Later” cookie is present. The cookie is set when the button is clicked. (You’ll see how in step 9).

Step 5: Create a Sticky Bar in Unbounce

Step 6: Add “Maybe Later” Script to the Sticky Bar

Add the following script “Before the body end tag”, replacing “lp-pom-button-45” with the id of your “No Thanks” button, and unbounce.com with your own domain.

document.getElementById("lp-pom-button-45").onclick = function() 
parent.postMessage(JSON.stringify('laterForget'), 'https://unbounce.com');

Step 7: Set URL Targeting on Sticky Bar

Set up the URL targeting for where you want the Sticky Bar to appear. This might be every page on your e-commerce site, or in my case just this post and another for testing.

Step 8: Set Cookie Targeting on Sticky Bar

Set the Trigger to “Arrival”, Frequency to “Every Visit”, and Cookie Targeting to show when the cookie we’re using is set. (You’ll see how it’s set in the next step).

Step 9: Add “Maybe Later” Code to Your Website

This is some code that allows the Popup and Sticky Bar to “talk” to their host page and set or delete the cookie.

// On receiving a message from the popup or Sticky Bar, set or delete the cookie.
window.onload = function() {
  function receiveMessage(e) {
    var eventData = JSON.parse(e.data);
    // The "later" message sets the cookie that triggers the Sticky Bar.
    if (eventData === 'later') {
      document.cookie = "mlshowSticky=true; expires=Thu, 11 May 2019 12:00:00 UTC; path=/";
    }
    // The "laterForget" message deletes the cookie, hiding the offer for good.
    if (eventData === 'laterForget') {
      document.cookie = "mlshowSticky=; expires=Thu, 01 Jan 1970 00:00:00 UTC; path=/;";
    }
  }
  // Listen for messages posted by the popup and Sticky Bar.
  window.addEventListener('message', receiveMessage);
};

Step 10: Enjoy Being Awesome

That’s all, folks!


What Do You Think?

I’d love to know what you think about this idea in the comments, so please jump in with your thoughts and ideas.

Later (maybe),
Oli

p.s. Don’t forget to see what Popups and Sticky Bars look like on your website with the new Live Preview Tool

View original: 

“Maybe Later” – A New Interaction Model for Ecommerce Entrance Popups

Thumbnail

Maximizing The Design Sprint

Following a summer of Wonder Woman, Spiderman, and other superhero blockbusters, it’s natural to fantasize about having a superpower of your own. Luckily for designers, innovators, and customer-centric thinkers, a design sprint allows you to see into the future to learn in just five days what customers think about your finished product.
As a UX consultant and in-house design strategist, I have facilitated dozens upon dozens of design workshops (ranging from rapid prototyping sessions to, of course, sprints).

View post: 

Maximizing The Design Sprint