Tag Archives: king

Visual Studio Live Share Can Do That?

Burke Holland

A few months ago, Microsoft released its free Visual Studio (VS) Live Share service. VS Live Share is Google Docs level collaboration for code. Multiple developers can collaborate on the same file at the same time without ever leaving their own editor.

After the release of Live Share, I realized that many of us have resigned ourselves to being isolated in our code, unaware that a service like VS Live Share offers a better way to work. This is partly because we are stuck in old habits and partly because we just don’t know everything VS Live Share can do. That last part I can help with!

In this article, we’ll go over the features and best practices for VS Live Share that make developer collaboration as easy as being an “Anonymous Hippo.”


[Image: Google Docs has an interesting way of handling anonymous participants]

Share Your Code

Live Share comes as an extension for both Visual Studio and Visual Studio Code (VS Code). In this article, we’re going to focus on VS Code.


[Image: The VS Live Share extension readme page in VS Code]

You can also install it via the VS Live Share Extension Pack, which includes the following extensions, all of which we are going to cover in this article…

  • VS Live Share
  • VS Live Share Audio
  • Slack Chat extension

Once the extension is installed, you will need to log in to the VS Live Share service. You can do that by opening the Command Palette (Ctrl/Cmd + Shift + P) and selecting “Sign In With Browser”. If you don’t log in and you try to start a new sharing session, you will be prompted to log in at that time.


[Image: Use the VS Code Command Palette to start a new Live Share session]

There are several ways to kick off a VS Live Share session. You can do it from the Command Palette, you can click that “Share” button in the bottom toolbar, or you can use the VS Live Share explorer view in the Sidebar.


[Image: There are a myriad of ways to start a new VS Live Share session]

Once the session starts, a link is copied to your clipboard. You can then send that link to others, and they can join your Live Share session — provided they are using VS Code as well. Which, aren’t we all?

Now you can collaborate just as if you were working on a regular old Word document.

The other person can not only see your code, but they can edit it, save it, execute it and even debug it. For you, they show up as a cursor with a name on it. You show up in their editor the same way.

The VS Live Share Explorer

The VS Live Share explorer shows up as a new icon in the Action Bar — which is that bar of icons on the far right of my screen (the far left of yours for default Action Bar placement). This is a sort of “ground zero” for everything VS Live Share. From here, you can start sessions, end them, share terminals, servers, and see who is connected.


[Image: The VS Live Share Explorer is a heads-up view of all things Live Share]

It’s a good idea to bind a keyboard shortcut to this VS Live Share Explorer view so that you can quickly toggle between it and your files. You can do this by opening the Keyboard Shortcuts editor (Ctrl/Cmd + K followed by Ctrl/Cmd + S) and then searching for “Show Live Share”. I bound mine to Ctrl/Cmd + L, which doesn’t seem to be bound to anything else. I find this shortcut to be intuitive (L for Live Share) and easy to hit on the keyboard.


[Image: You can create a binding for the VS Live Share Explorer viewlet]

Share Code Read-Only

When you start a new sharing session, you will be asked whether you would like to share your workspace read-only. If you select read-only, people will be able to see your code and follow your movements, but they will not be able to interact with it.


[Image: Sharing sessions are read-write by default, but you can make them read-only]

This mode is useful when you are sharing with someone that you don’t necessarily trust — maybe a vendor, partner or an estranged ex.

It’s also particularly useful for instructors. Note that at the time of this writing, VS Live Share is locked to 5 concurrent users. Since you will probably want more than that in read-only mode, especially if you’re teaching a group, you can raise the limit to 30 by adding the following line to your User Settings file (open it with Ctrl/Cmd + ,):

"liveshare.features": "experimental"

Change The Default Join Behavior

Anyone with the link can join your Live Share session. When they join, you’ll see a pop-up letting you know. Likewise, when they disconnect, you get notified. This is the default behavior for VS Live Share.


[Image: VS Code will alert you whenever someone joins your session]

It’s a good idea to change this so that you have to manually approve someone before they can join your session. This is to protect you in the case where you go to lunch and forget to disconnect your session. Your co-workers can’t log back in, change one letter in your database connection string and then laugh while you spend the next four hours trying to figure out how your life has gone so horribly wrong.

To enable this, add the following line to your User Settings file (Ctrl/Cmd + ,):

"liveshare.guestApprovalRequired": true

Now you’ll be prompted when someone wants to join. If you block someone, they are blocked for the duration of the session. If they try to join again, you won’t be notified and they will be unceremoniously rejected by VS Live Share.

Go and enjoy your lunch. Your computer is safe.

Focus Followers

By default, anyone who joins your Live Share session is “following” you. That means that their editor will load up whatever file you are in and scroll whenever you scroll. Even if you switch files, participants will see exactly what you see.

The second that a person makes changes to a file, they are no longer following you. So if you are both working on a file together, and then you go to a different file, they won’t automatically go with you. That can lead to a lot of confusion with you talking about code in the file you’re in while the other person is looking at something entirely different.

Besides just telling each other where you are (which works, btw), there is a handy command called “Focus Participants” in the Command Palette (Ctrl/Cmd + Shift + P).


[Image: Access the “focus” command from the VS Code Command Palette]

You can also access it as an icon in the VS Live Share Explorer view.


[Image: Send a follow request by clicking the follow icon in the VS Live Share Explorer viewlet]

This will focus your participants on the next thing you click on or scroll to. By default, VS Live Share focus requests are accepted implicitly. If you don’t want people to be able to focus you, you can add the following line to your User Settings file.

"liveshare.focusBehavior": "prompt"

Also note that you can follow participants. If you click on their name in the VS Live Share Explorer view, you will begin to follow them.

Because following is turned off as soon as the other person begins editing code, it can be tough to know exactly when people are following you and when they aren’t. One place you can look is in the VS Live Share Explorer view. It will tell you the file that a person is in, but not whether or not they are following you.

A good practice is to just remember that focus is always changing so people may or may not see what you see at any given time.

Debug As A Team

Participants can share any debug sessions that you run. If you start a debug session, they will get the exact same experience that you do. If it breaks on your side, it breaks on theirs, and they get the full debug view into all of your code.

They can step in, out, over, add watches, evaluate in the Debug Console; any debugging that you can do, they can do too, and they can control it.

Debugging can also be launched by participants. By default, though, VS Code does not allow your debugger to be started remotely. To enable this, add the following line to your User Settings file (Ctrl/Cmd + ,):

"liveshare.allowGuestDebugControl": true

Share Your Terminal

A lot of the work we do as developers isn’t in our code; it’s in the terminal. Some days it seems like I spend about as much time in my terminal as I do in my editor. That means that if an error shows up in your terminal, or you need to run some command, it would be nice if your participants in VS Live Share could see your terminal in addition to your code.

VS Code has an integrated terminal, and you can share it with VS Live Share.


[Image: Access the “Share Terminal” command from the VS Code Command Palette]

When you do this, you have the opportunity to share your terminal as read-only, or as read-write.


[Image: Always share your terminal read-only unless you absolutely have to share it with write access]

By default, you should be sharing your terminal as read-only. When you share your terminal read-write, the user can execute arbitrary commands directly on your terminal. Let that sink in for a moment. That’s heavy.

It goes without saying that having remote write access to someone’s terminal comes with a lot of trust and responsibility. You should only ever share your terminal read-write with people that you trust implicitly. Estranged exes are probably off the table.

Sharing your terminal read-only safely allows the person on the other end of the line to see what you are typing and your terminal output in real time, but restricts them from typing anything into that terminal.

Should you find yourself in a scenario where it would be quicker for the other person to just get at your terminal instead of trying to walk you through some wacky command with a ton of flags, you can share your terminal read-write. In this mode, the other person has full remote access to your terminal. Choose your friends wisely.

Share Your localhost

Suppose the command running in your shared terminal ends with a link to a site running on http://localhost:8080. With VS Live Share, you can share that localhost so that the other person can access it just as if it were their own localhost.

If you are running a shared debug session, when the participant hits that localhost URL on their end, it will break for both of you if a breakpoint is hit. Even better, you can share any TCP process. That means that you can share something like a database or a Redis cache. For instance, you could share your local Mongo DB server. Seriously! This means no more changing config files or trying to get a shared database up. Just share the port for your local Mongo DB instance.

Share The Right Files The Right Way

Sometimes you don’t want collaborators to see certain files. There are likely private keys and passwords in your project that are not checked into source control and not suitable for public viewing. In this case, you would want to hide those files from anyone participating in your Live Share session.

By default, VS Live Share will hide any file that is specified in your .gitignore. If there is a file that you want to hide, just add it to your .gitignore. Note, though, that this only hides the file in the project view. If you are in a shared debugging session and you step into a file that is in the .gitignore, it is still loaded up in the editor, and your collaborators will be able to see it.

You can get more fine-grained control over how you share files by creating a .vsls.json file.

For instance, if you wanted to make sure that any files that are in the .gitignore are never visible, even during debugging, you can set the gitignore property to exclude.


    "$schema": "http://json.schemastore.org/vsls",
    "gitignore":"exclude"

Likewise, you could show everything in your .gitignore and control file visibility directly from the .vsls.json file. To do that, set gitignore to none and then use the excludeFiles and hideFiles properties. Remember — exclude means never visible, and hide means “not visible in the file explorer.”


    "$schema": "http://json.schemastore.org/vsls",
    "gitignore":"none",
    "excludeFiles":[
        "*.env"
    ],
    "hideFiles": [
        "dist"
    ]

Sharing And Extensions

Part of the appeal of VS Code to a lot of developers is the massive extensions marketplace. Most people will have more than a few installed. It’s important to understand how extensions will work, or not work, in the context of VS Live Share.

VS Live Share will synchronize anything that is specific to the context of the project you are sharing. For instance, if you have the Vetur extension installed because you are working with a Vue project, it will be shared across to any participants — regardless of whether or not they have it installed as well. The same is true for other context-specific things, like linters, formatters, debuggers, and language services.

VS Live Share does not synchronize extensions that are user specific. These would be things like themes, icons, keyboard bindings, and so on. As a general rule of thumb, VS Live Share shares your context, not your screen. You can consult the official docs article on this subject for a more in-depth explanation of what extensions you can expect to be shared.

Communicate While You Collaborate

One of the first things people do on their inaugural VS Live Share experience is to try to communicate by typing in code comments. This seems like the write (get it?) thing to do, but it’s not really how VS Live Share was designed to be used.

VS Live Share is not meant to replace your chat client of choice. You likely already have a preferred chat mechanism, and VS Live Share assumes that you will continue to use that.

If you’re already using Slack, there is a VS Code extension called Slack Chat. This extension is still a tad early in its development, but it looks quite promising. It puts VS Code in split mode and embeds Slack on the right-hand side. Even better, you can start a Live Share session directly from the Slack chat.


[Image: The Slack Chat extension puts Slack inside of your editor]

Another tool that looks quite interesting is called CodeStream.

CodeStream

While VS Live Share looks to improve collaboration from the editor, CodeStream is aiming to solve that same problem from a chat perspective.

The CodeStream extension allows you to chat directly within VS Code and those chats become part of your code history. You can highlight a chunk of code to discuss and it goes directly into the chat so there is context for your comments. These comments are then saved as part of your Git repo. They also show up in your code as little comment icons, and these comments will show up no matter which branch you are on.

When it comes to VS Live Share, CodeStream offers a complementary set of features. You can start new sessions directly from the chat pane, as well as by clicking on an avatar. New sessions automatically create a corresponding chat channel that you can persist with the code, or dispose of when you are done.

If chatting isn’t enough to get the job done, and you need to collaborate like it’s 1999, help is just a phone call away.

VS Live Share Audio

While VS Live Share isn’t trying to reinvent chat, it does reinvent your telephone. Kind of.

With the VS Live Share Audio extension, you can call someone directly and do voice chat from within VS Code.


[Image: Make audio calls from VS Code using the VS Live Share Audio extension]

The other person will then get a prompt to join your call.


[Image: VS Code will ask you if you want to join an audio call that is in progress]

You will see a speaker icon in the bottom status bar when you are connected to a call. You can click on that speaker to change your audio device, mute yourself, or disconnect from the call.


[Image: You have full control over audio settings when in a VS Live Share Audio call]

The last tip I’ll give you is probably the most important, and it’s not a fancy feature or obscure setting you didn’t know existed.

Change Your Muscle Memory

We’ve got years of learned behavior when it comes to getting help or sharing our code. The state of developer collaboration tools has been so bad for so long that we are conditioned to paste code into Slack, start awkward Skype calls that consist mostly of “tell me when you can see my screen”, or crowd around a monitor and point excessively, i.e. stock photo style.


[Image: A group of people pointing at a computer screen]

The most important thing you can do to get the most out of VS Live Share is to actually use VS Live Share. And it will have to be a “conscious” effort.

Your brain is good at patterns. You are constantly recognizing and classifying the world around you based on patterns you have identified, and you are so good at it, you don’t even realize you are doing it. You then develop default responses to these patterns. You form instincts. This is why you will default to the old ways of collaboration without even thinking about what you are doing. Before you know it you will be on a Skype call with someone sharing your screen — even if you have Live Share installed.

I’ve written a lot about VS Code and people will ask me from time to time how they can get more productive with their editor. I always say the same thing: the next time you reach for the mouse to do something, stop. Can you do that something with the keyboard instead? You probably can. Look up the shortcut and then make yourself use it. At first it’s going to be slower, but if you are willing to deliberately adopt a different behavior, you will be astonished at how fast your brain will default to the more productive way of doing something.

The same goes for Live Share. You will be on a call sharing your screen when it occurs to you that you could be using Live Share. At that moment, stop; click that “Share” button at the bottom of VS Code.

Yes, the person on the other end may not have the extension installed. Yes, it may take a moment to set it up. But if you work on establishing this behavior now, the next time you go to do this, it will “just work” and it won’t be long before you don’t even have to think about it, and at that point, you will finally have achieved that “Anonymous Hippo” level of collaboration.

Fixed Elements And Overlays In XD: Incredibly Easy And Fun Methods For Your Prototypes

Manuela Langella



(This article is kindly sponsored by Adobe.) A fixed element is an object you set to a fixed position on the artboard, allowing other items to scroll underneath. This way, you get a realistic simulation of scrolling on desktop and mobile. With the new overlay feature, you can simulate interactions such as lightbox effects and submenus.

How do famous brands use fixed elements and overlays? Well, let’s take a look at some examples to get some inspiration first.


[Image: Examples of brands using fixed elements and overlays. From left to right: 1) McDonald’s mobile home. 2) A submenu slides up when you click on the hamburger menu; this is an example of an overlay. 3) Netflix’s Italian mobile website home screen. 4) Netflix sets its call to action as a fixed element; when you scroll down, the button stays fixed to the bottom of the screen. 5) Adobe mobile home. 6) By clicking on the menu symbol, a submenu comes out as an overlay.]

In this tutorial, we will learn how to set a menu bar as a fixed element and how to apply an overlay transition in a prototype, to simulate a menu opening from the click of a button. Both examples will be done in a mobile template, so that we can see our simulation in action directly on our mobile device. I’ve also included an Illustrator file with icons, which you can use to set up your examples quickly.

Let’s get started.

Preparing The Mobile Template

Open Adobe XD, and choose the “iPhone 6/7/8 Plus” template. Then, go to File → Save As and choose a name to save your file (mine is mobile.xd).





Let’s create a restaurant app in which people can select what to order from a list of food.

We will create two home layouts. The first one will be a long page, which we will use to see how fixed navigation works. The second will have a full-screen image, and the user will be able to click and open a menu bar that overlays the home screen.

To get started, click on the artboard icon on the left side, and click to the right of your current artboard. This will create a second identical artboard, near the first one.





Let’s begin to design our elements, starting with the navigation bar. Click on the Rectangle tool (R) and draw a shape 414 pixels wide and 48 pixels tall. Set its color as #DE4F4F.





I’ve prepared some icons in Illustrator to use in our layout. Just open the Illustrator file I’ve provided, and drag and drop the icons into your library.

In doing so, your icons will be automatically uploaded to your Adobe XD library, too.

To learn more about how to use libraries in different apps, read my earlier article, in which I go over some examples of how to add icons and elements to a library (in Illustrator, for instance) and then access them by opening that library in other apps (XD, in this case).

Once you have added the icons, open your XD library. You should see the icons in place:





Drag and drop the icons on your artboard, as shown below. Position them, and make sure they are all about 25 pixels wide.





Because we need our icons to be white, we have to modify them. We can do that directly in the library, as demonstrated in my previous tutorial. Once that’s done, the icons will be updated in XD automatically, without our having to drag them from the library again.





Now that the icons we want are in place, let’s create a logo. Let’s call this app “Gusto”. We’ll simply use the Text tool to add it. (I’m using the Leckerli One font here, but feel free to use whichever you like.) Align the logo to the middle of the navigation bar by clicking “Align center (horizontally)” in the right sidebar.





Group all of the navigation elements together, and call the group “Menu”. To do this, select all elements in the left panel, right-click and choose “Group”.





Let’s add a beautiful hero image. I selected one from Pexels. Drag it on your artboard, and resize its height to 380 pixels.





Now, click on Rectangle tool (R), and draw a rectangle the same size as the hero image, and place it on the image. Set a gradient for the rectangle’s color, using the values shown in the image below.





(If you’d like more information about gradients, feel free to see my previous tutorial on how to apply them in XD.)

Insert some white text on the hero image and a circle for a button. Place a little circle with a number on the cart icon as well; we will need it later.





Next, let’s increase the artboard’s height. We have to do that in order to insert new elements and to create the scrolling simulation.

After double-clicking on the artboard, set its height to 1265 pixels. Be sure that “Scrolling” is set to “Vertical” and that the “Viewport Height” is set to 736 pixels. A little blue marker will allow you to set the scrolling boundary towards the bottom of the artboard, as seen below:





Let’s add in our content: Gusto’s mouthwatering menu. Click on the Rectangle tool (R) to create a rectangle for the picture that we will add.





Drag and drop a picture directly into the box we just created; the image will automatically fit inside it. Click on it once, and drag the little white circle from a corner inwards in order to round all of the corners; their values should be around 25. Get rid of the border by unchecking the border value in the right sidebar.

Click on the Text tool (T), and write a title on the right side of the image. I chose Lato as the font, at 14 pixels. Feel free to use another font, but maintain the 14-pixel size.





Grab the Text tool (T) again, and write some lines for the description (Lato, 10 pixels) and for the price (Lato, 16 pixels).





Take the Rectangle tool (R) and draw a rectangle of 100 by 30 pixels. Color it with the same orange we used on the button for the hero image; add the text “Add to Cart” with the Text tool (T); and add the cart icon from the library. All of these steps are covered in the short video below:

Finally, click on “Repeat Grid” to create a grid for this section. Once that’s done, we can change images and text easily, as shown in the video below:

If you want to learn more about how to create grids, follow my tutorial.

I used the following pictures from Pexels:

  • https://www.pexels.com/photo/close-up-of-food-247685/
  • https://www.pexels.com/photo/food-dinner-pasta-spaghetti-8500/
  • https://www.pexels.com/photo/selective-focus-photography-of-beef-steak-with-sauce-675951/
  • https://www.pexels.com/photo/food-plate-chocolate-dessert-132694/
  • https://www.pexels.com/photo/bread-food-sandwich-wood-62097/

Add some titles, descriptions and buttons.





Finally, let’s add a rectangle for the footer, with the text “Gusto” in the center. Set the rectangle’s fill color to #211919.





Yes! We’ve completed the first template design. Let’s set up our second template before we begin prototyping.

For our second mobile layout, just copy and paste the navigation and hero section from the first layout, and size the hero image to be full screen. Then, add a “Try Now” button to it.

In the short video below, I show you how to copy and paste elements into the second artboard, create a new button with the Rectangle tool (R) and write text on it with the Text tool (T).





Excellent! Let’s move on and create our prototypes.

Setting Fixed Elements

We want to make the top navigation of our layout fixed, making it stick to its position as we scroll the artboard.

Click on your “Menu” group to select it, and select “Fixed Position” in the right sidebar.





Important: In order for all elements to scroll under the menu, the menu should be on top of all other elements. Simply place the menu folder at the top, in the left sidebar.





Now, to see your fixed navigation in action, simply click on the “Desktop Preview” button and try scrolling.

Tremendously simple, isn’t it?

Setting Overlay Elements

To see how overlays work in XD, we first need to create the elements that will be overlaid. When you click an item in the menu, what would you expect to happen? Exactly: A submenu should appear.

Let’s create three different submenus, like the ones in the image below, using the Rectangle tool (R). I chose a rectangle because the menu will overlay the screen, so it will cover not the whole artboard but just a part of it.

Follow the video below to see how I created the three overlay menus. You will see that I used the Rectangle tool (R), Line tool (L) and Text tool (T). We’re using rectangles to create the menu backgrounds because we need an object to overlay the screen. I’ve included the icons in the Adobe Illustrator file.

Below, you’ll see how I use “Repeat Grid” and how I modify elements inside of it.

Here is the final result:





We will work on the second home layout at this point.

Set the visual mode to “Prototype”, selecting it from the top left of the screen.





Next, double-click on the little hamburger menu icon, and drag and drop the little blue arrow onto the “Overlay 1” artboard. When the popup window appears, choose “Overlay” and “Slide right”. Then, click the “Desktop Preview” button to see it in action.


Let’s do the same thing with the user icon and cart icon. Double-click on the user icon in Prototype mode, and drag and drop the little blue arrow onto the “Overlay 2” artboard. When the popup window appears, choose “Overlay” and “Slide left”. Then, click the “Desktop Preview” button to see it in action.


Now, double-click on the cart icon in Prototype mode, and drag and drop the little blue arrow onto the “Overlay 3” artboard. When the popup window appears, choose “Overlay” and “Slide left”. Click the “Desktop Preview” button again to see it work.


We’re done! These great new features are super-easy to learn, and they’ll add a new level of interactivity simulation to your prototypes.

Quick tip: Want to preview the layout on your phone? Just upload your XD file to Creative Cloud, download the XD app for mobile, and open your document.

Here’s what we have learned in this tutorial:

  • set and create mobile layouts and elements,
  • set fixed elements,
  • use overlays to simulate a click-to-open submenu.

Where would you use fixed elements or overlays? Feel free to share your examples in the comments below!

Expert Brand Building Tips From Klaviyo’s Ecommerce Summit, Part Three

Welcome to Part Three of my blog series covering just a few of the things I learned as an attendee at Klaviyo: BOS – a wonderful two-day summit run by one of my favorite local startups. In Part One I focused on how ecommerce companies can attract and convert website traffic to their businesses with session recaps of “Using Google To Grow Your Online Store,” “SEO for Ecommerce,” and “You Got Them To Your Site – What Now?” In Part Two I shared email design tips for nurturing and retaining website leads from these two top-notch sessions: “Email A/B Testing: Beyond the…

6 Things You Need to Know About CRO & Social Login

Entrepreneurs are always trying to figure out how to engage their user more and boost their website’s conversion rate. One way to do that is through social sign-on, also referred to as social login or lazy sign-in. With social login, users can access your website using the social account IDs that they already have instead of setting up new login details for your website. And they don’t have to remember a new set of login credentials. Simply put, social login enhances a website’s user experience while allowing marketers to collect more accurate user data, including gender, age, interests, relationship status and a…

Designing A Textbox, Unabridged

Shane Hudson



Ever spent an hour (or even a day) working on something, just to throw the whole lot away and redo it in five minutes? That isn’t just a beginner’s code mistake; it is a real-world situation that you can easily find yourself in, especially if the problem you’re trying to solve isn’t well understood to begin with.

This is why I’m such a big proponent of upfront design, user research, and creating (often multiple) prototypes — also known as the old adage of “you don’t know what you don’t know.” At the same time, it is very easy to look at something someone else has made, which may have taken them quite a lot of time, and think it is extremely easy because you have the benefit of hindsight by seeing a finished product.

This idea that simple is easy was summed up nicely by Jen Simmons while speaking about CSS Grid and Piet Mondrian’s paintings:

“I feel like these paintings, you know, if you look at them with the sense of like ‘Why’s that important? I could have done that.’ It’s like, well yeah, you could paint that today because we’re so used to this kind of thinking, but would you have painted this when everything around you was Victorian — when everything around you was this other style?”

I feel this sums up the feeling I have about seeing websites and design systems that make complete sense; it’s almost as if the fact they make sense means they were easy to make. Of course, it is usually the opposite; writing the code is the simple bit, but it’s the thinking and process that goes into it that takes the most effort.

With that in mind, I’m going to explore building a text box, in an exaggeration of situations many of us often find ourselves in. Hopefully, by the end of this article, we can all feel more empathetic about how the journey from start to finish is rarely linear.


Brief

We all know that careful planning and understanding of the user need is important to a successful project of any size. We also all know that all too often we feel the need to rush to quickly design and develop new features. That can often mean our common sense and best practices are forgotten as we slog away to quickly get onto the next task on the everlasting to-do list. Rinse and repeat.

Today our task is to build a text box. Simple enough, it needs to allow a user to type in some text. In fact, it is so simple that we leave the task to last because there is so much other important stuff to do. Then, just before we pack up to go home, we smirk and write:

<input type="text">

There we go!

Oh wait, we probably need to hook that up to send data to the backend when the form is submitted, like so:

<input type="text" name="our_textbox">

That’s better. Done. Time to go home.

How Do You Add A New Line?

The issue with using a simple text box is that it is pretty useless if you want to type a lot of text. For a name or title it works fine, but quite often a user will type more text than you expect. Trust me when I say that if you leave a textbox for long enough without strict validation, someone will paste the entirety of War and Peace. In many cases, this can be prevented by having a maximum number of characters.

In this situation though, we have found out that our laziness (or bad prioritization) of leaving it to the last minute meant we didn’t consider the real requirements. We just wanted to do another task on that everlasting to-do list and get home. This text box needs to be reusable; examples of its usage include as a content entry box, a Twitter-style note box, and a user feedback box. In all of those cases, the user is likely to type a lot of text, and a basic text box would just scroll sideways. Sometimes that may be okay, but generally, that’s an awful experience.

Thankfully for us, that simple mistake doesn’t take long to fix:

<textarea name="our_textbox"></textarea>

Now, let’s take a moment to consider that line. A <textarea>: as simple as it can get without removing the name. Isn’t it interesting (or is it just my pedantic mind?) that we need to use a completely different element to add a new line? It isn’t a type of input, or an attribute used to add multi-line support to an input. Also, the <textarea> element is not self-closing, but an input is? Strange.

This “moment to consider” sent me time traveling back to October 1993, trawling through the depths of the www-talk mailing list. There was clearly much discussion about the future of the web and what “HTML+” should contain. This was 1993 and they were discussing ideas such as <input type="range"> which wasn’t available until HTML5, and Jim Davis said:

“Well, it’s far-fetched I suppose, but you might use HTML forms as part of a game playing interface.”

This really does show that the web wasn’t just intended to be about documents, as is widely believed. Marc Andreessen suggested having <input type="textarea"> instead of allowing new lines in the single-line text type, saying (http://1997.webhistory.org/www.lists/www-talk.1993q4/0200.html):

“Makes the browser code cleaner — they have to be handled differently internally.”

That’s a fair reason to have <textarea> separate to text, but that’s still not what we ended up with. So why is <textarea> its own element?

I didn’t find any decision in the mailing list archives, but by the following month, the HTML+ Discussion Document had the <textarea> element and a note saying:

“In the initial design for forms, multi-line text fields were supported by the INPUT element with TYPE=TEXT. Unfortunately, this causes problems for fields with long text values as SGML limits the length of attribute literals. The HTML+ DTD allows for up to 1024 characters (the SGML default is only 240 characters!)”

Ah, so that’s why the text goes within the element and cannot be self-closing; they were not able to use an attribute for long text. In 1994, the <textarea> element was included, along with many others from HTML+ such as <option> in the HTML 2 spec.

Okay, that’s enough. I could easily explore the archives further but back to the task.

Styling A <textarea>

So we’ve got a default <textarea>. If you rarely use them or haven’t seen the browser defaults in a long time, then you may be surprised. A <textarea> (made almost purely for multi-line text) looks very similar to a normal text input, except most browser defaults style the border darker, the box slightly larger, and there are lines in the bottom right. Those lines are the resize handle; they aren’t actually part of the spec, so browsers all handle (pun absolutely intended) it in their own way. That generally means that the resize handle cannot be restyled, though you can disable resizing by setting resize: none on the <textarea>. It is possible to create a custom handle or use browser-specific pseudo-elements such as ::-webkit-resizer.


[Image: A default textarea with no styling: small, with a grey border and three diagonal lines as a resize handle]

It’s important to understand the defaults, especially because of the resizing ability. It’s a very unique behavior; the user is able to drag to change the size of the element by default. If you don’t override the minimum and maximum sizes then the size could be as small as 9px × 9px (when I checked Chrome) or as large as they have patience to drag it. That’s something that could cause mayhem with the rest of the site’s layout if it’s not considered. Imagine a grid where <textarea> is in one column and a blue box is in another; the size of the blue box is purely decided by the size of the <textarea>.

Other than that, we can approach styling a <textarea> much the same as any other input. Want to change the grey around the edge into thick green dashes? Sure here you go: border: 5px dashed green;. Want to restyle the focus in which a lot of browsers have a slightly blurred box shadow? Change the outline — responsibly though, you know, that’s important for accessibility. You can even add a background image to your <textarea> if that interests you (I can think of a few ideas that would have been popular when skeuomorphic design was more celebrated).

Scope Creep

We’ve all experienced scope creep in our work, whether it is a client that doesn’t think the final version matches their idea or you just try to squeeze in a tiny tweak and end up taking forever to finish it. So I (enjoying the persona of an exaggerated project manager telling us what we need to build) have decided that our <textarea> just is not good enough. Yes, it is now multi-line, and that’s great, and yes it even ‘pops’ a bit more with its new styling. Yet, it just doesn’t fit the very vague user need that I’ve pretty much just thought of now, after we thought we were almost done.

What happens if the user puts in thousands of words? Or drags the resize handle so far it breaks the layout? It needs to be reusable, as we have already mentioned, but in some situations (such as a ‘Twitteresque’ note-taking box), we will need a limit. So the next task is to add a character limit. The user needs to be able to see how many characters they have left.

In the same way we started with <input> instead of <textarea>, it is very easy to think that adding the maxlength attribute would solve our issue. That is one way to limit the number of characters the user types, and it uses the browser’s built-in validation, but it is not able to display how many characters are left.

We started with the HTML, then added the CSS, now it is time for some JavaScript. As we’ve seen, charging along like a bull in a china shop without stopping to consider the right approaches can really slow us down in the long run. Especially in situations where there is a large refactor required to change it. So let’s think about this counter; it needs to update as the user types, so we need to trigger an event when the user types. It then needs to check if the amount of text is already at the maximum length.

So which event handler should we choose?

  • change
    Intuitively, it may make sense to choose the change event. It works on <textarea> and does what it says on the tin. Except, it only triggers when the element loses focus so it wouldn’t update while typing.
  • keypress
    The keypress event is triggered when typing any character, which is a good start. But it does not trigger when characters are deleted, so the counter wouldn’t update after pressing backspace. It also doesn’t trigger after a copy/paste.
  • keyup
    This one gets quite close, it is triggered whenever a key has been pressed (including the backspace button). So it does trigger when deleting characters, but still not after a copy/paste.
  • input
    This is the one we want. This triggers whenever a character is added, deleted or pasted.

This is another good example of how using our intuition just isn’t enough sometimes. There are so many quirks (especially in JavaScript!) that are all important to consider before getting started. So the code for a live counter adds an input event handler to the <textarea> and updates the counter (which we’ve done with a span that has a class called counter) every time it fires. The maximum number of characters is set in a variable called maxLength and added to the HTML, so if the value is changed it only has to change in one place.

var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter')
var maxLength = 200

textEl.setAttribute('maxlength', maxLength)
textEl.addEventListener('input', () => {
  var count = textEl.value.length
  counterEl.innerHTML = `${count}/${maxLength}`
})
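
For reference, a minimal sketch of the HTML that the script above assumes (not the article’s exact markup) might look like this, with the counter’s default text of “0/200” discussed in the next section:

<textarea name="our_textbox"></textarea>
<span class="counter">0/200</span>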

Browser Compatibility And Progressive Enhancement

Progressive enhancement is a mindset in which we understand that we have no control over what the user exactly sees on their screen, and instead, we try to guide the browser. Responsive Web Design is a good example, where we build a website that adjusts to suit the content on the particular size viewport without manually setting what each size would look like. It means that on the one hand, we strongly care that a website works across all browsers and devices, but on the other hand, we don’t care that they look exactly the same.

Currently, we are missing a trick. We haven’t set a sensible default for the counter. The default is currently “0/200” (if 200 were the maximum length); this kind of makes sense but has two downsides. The first is that it doesn’t really make sense at first glance: you need to start typing before it is obvious that the 0 updates as you type. The other downside is that the counter only becomes useful as the JavaScript updates it, meaning that if the JavaScript event doesn’t trigger properly (maybe the script did not download correctly, or it uses JavaScript that an old browser doesn’t support, such as the double arrow in the code above), then it won’t do anything. A better way would be to think carefully beforehand: how would we go about making it useful both when it is working and when it isn’t?

In this case, we could make the default text be “200 character limit.” This would mean that without any JavaScript at all, the user would always see the character limit but it just wouldn’t feedback about how close they are to the limit. However, when the JavaScript is working, it would update as they type and could say “200 characters remaining” instead. It is a very subtle change but means that although two users could get different experiences, neither are getting an experience that feels broken.

Another default that we could set is the maxlength on the element itself rather than afterwards with JavaScript. Without doing this, the baseline version (the one without JS) would be able to type past the limit.
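
A quick sketch of that approach, reusing the same elements as before (the exact wording of the messages is just the suggestion above, not a prescribed API):

<!-- Baseline: useful even if no JavaScript runs at all -->
<textarea name="our_textbox" maxlength="200"></textarea>
<span class="counter">200 character limit</span>

// Enhancement: only kicks in when the script loads and runs
var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter')
var maxLength = parseInt(textEl.getAttribute('maxlength'), 10)

textEl.addEventListener('input', () => {
  var remaining = maxLength - textEl.value.length
  counterEl.innerHTML = remaining + ' characters remaining'
})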

User Testing

It’s all very well testing on various browsers and thinking about the various permutations of how devices could serve the website in a different way, but are users able to use it?

Generally speaking, no. I’m consistently shocked by user testing; people never use a site how you expect them to. This means that user testing is crucial.

It’s quite hard to simulate a user test session in an article, so for the purposes of this article, I’m going to just focus on one point that I’ve seen users struggle with on various projects.

The user is happily writing away, gets to 0 characters remaining, and then gets stuck. They forget what they were writing, or they don’t notice that what they are typing has stopped being entered.

This happens because there is nothing telling the user that something has changed; if they are typing away without paying much attention, then they can hit the maximum length without noticing. This is a frustrating experience.

One way to solve this issue is to allow overtyping, so the maximum length still counts for it to be valid when submitted but it allows the user to type as much as they want and then edit it before submission. This is a good solution as it gives the control back to the user.

Okay, so how do we implement overtyping? Instead of jumping into the code, let’s step through it in theory. maxlength doesn’t allow overtyping; it just stops allowing input once it hits the limit. So we need to remove maxlength and write a JavaScript equivalent. We can use the input event handler as we did before, as we know that works on paste, etc. In that handler, we would check whether the user has typed more than the limit, and if so, the counter text could change to say “10 characters too many.” The baseline version (without the JS) would no longer have a limit at all, so a useful middle ground is to keep the maxlength attribute on the element in the HTML and remove it using JavaScript.

That way, the user would see that they are over the limit without being cut off while typing. There would still need to be validation to make sure it isn’t submitted, but that is worth the extra small bit of work to make the user experience far better.
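
Here is a minimal sketch of that logic, building on the earlier snippet (the message wording is illustrative, and the submit-time validation mentioned above is left out for brevity):

var textEl = document.querySelector('textarea')
var counterEl = document.querySelector('.counter')
var maxLength = parseInt(textEl.getAttribute('maxlength'), 10)

// The attribute stays in the HTML as the no-JavaScript baseline;
// removing it here is what allows overtyping.
textEl.removeAttribute('maxlength')

textEl.addEventListener('input', () => {
  var remaining = maxLength - textEl.value.length
  if (remaining >= 0) {
    counterEl.innerHTML = remaining + ' characters remaining'
  } else {
    counterEl.innerHTML = Math.abs(remaining) + ' characters too many'
  }
})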


[Image: Allowing the user to overtype: “17 characters too many” shown in red text next to the textarea]

Designing The Overtype

This gets us to quite a solid position: the user is now able to use any device and get a decent experience. If they type too much it is not going to cut them off; instead, it will just allow it and encourage them to edit it down.

There’s a variety of ways this could be designed differently, so let’s look at how Twitter handles it:


[Image: Twitter’s textarea, with overtyped text highlighted with a red background]

Twitter has been iterating its main tweet <textarea> since they started the company. The current version uses a lot of techniques that we could consider using.

As you type on Twitter, there is a circle that completes once you get to the character limit of 280. Interestingly, it doesn’t say how many characters are available until you are 20 characters away from the limit. At that point, the incomplete circle turns orange. Once you have 0 characters remaining, it turns red. After the 0 characters, the countdown goes negative; it doesn’t appear to have a limit on how far you can overtype (I tried as far as 4,000 characters remaining) but the tweet button is disabled while overtyping.

So this works the same way as our <textarea> does, the main difference being that the count is represented by a circle which updates, and which only shows the number of characters remaining once 260 characters have been typed. We could implement this by removing the text and replacing it with an SVG circle.
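
As a rough illustration of one way to do that, the common stroke-dasharray technique can draw a circle that fills as the limit approaches (the markup, sizes, and class names here are invented for the sketch, this is not Twitter’s actual implementation, and textEl and maxLength are assumed from the earlier snippets):

<svg width="24" height="24" viewBox="0 0 24 24">
  <circle class="progress" cx="12" cy="12" r="10"
          fill="none" stroke="currentColor" stroke-width="2" />
</svg>

var circleEl = document.querySelector('.progress')
var circumference = 2 * Math.PI * 10  // matches r="10" above
circleEl.style.strokeDasharray = circumference

textEl.addEventListener('input', () => {
  var fraction = Math.min(textEl.value.length / maxLength, 1)
  // The visible arc grows as the offset shrinks towards zero
  circleEl.style.strokeDashoffset = circumference * (1 - fraction)
})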

The other thing that Twitter does is add a red background behind the overtyped text. This makes it completely obvious that the user is going to need to edit or remove some of the text to publish the tweet. It is a really nice part of the design. So how would we implement that? We would start again from the beginning.

You remember the part where we realized that a basic input text box would not give us multiline? And that a maxlength attribute would not give us the ability to overtype? This is one of those cases. As far as I know, there is nothing in CSS that gives us the ability to style parts of the text inside a <textarea>. This is the point where some people would suggest web components, as what we would need is a pretend <textarea>. We would need some kind of element — probably a div — with contenteditable on it and in JS we would need to wrap the overtyped text in a span that is styled with CSS.
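
To make the span-wrapping idea concrete, here is a deliberately naive sketch (the .fake-textarea and .overtyped class names are invented for illustration, maxLength is assumed from earlier, and the genuinely hard part, preserving the caret position, is only noted in a comment):

var editorEl = document.querySelector('.fake-textarea')  // a div with contenteditable

editorEl.addEventListener('input', () => {
  var text = editorEl.textContent
  if (text.length <= maxLength) return

  // Wrap everything past the limit in a span that CSS can style,
  // e.g. with a red background like Twitter's
  var span = document.createElement('span')
  span.className = 'overtyped'
  span.textContent = text.slice(maxLength)

  editorEl.textContent = text.slice(0, maxLength)
  editorEl.appendChild(span)

  // Rewriting the content like this resets the caret to the start;
  // a real implementation would need to save and restore the selection.
})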

What would the baseline non-JS version look like then? Well, it wouldn’t work at all, because while contenteditable will work without JS, we would have no way to actually do anything with it. So we would need to have a <textarea> by default and remove that if JS is available. We would also need to do a lot of accessibility testing because, while we can trust a <textarea> to be accessible, relying on browser features is a much safer bet than building your own components. How does Twitter handle it? You may have seen it; if you are on a train and your JavaScript doesn’t load while going into a tunnel, then you get chucked into a decade-old legacy version of Twitter where there is no character limit at all.

What happens then if you tweet over the character limit? Twitter reloads the page with an error message saying “Your Tweet was over the character limit. You’ll have to be more clever.” No, Twitter. You need to be more clever.

Retro

The only way to conclude this dramatization is a retrospective. What went well? What did we learn? What would we do differently next time or what would we change completely?

We started very simple with a basic textbox; in some ways, this is good because it can be all too easy to overcomplicate things from the beginning, and an MVP approach is good. However, as time went on, we realized how important it is to apply some critical thinking and to consider what we are doing. We should have known a basic textbox wouldn’t be enough and that a way of setting a maximum length would be useful. It is even possible that, if we had conducted or sat in on user research sessions in the past, we could have anticipated the need to allow overtyping. As for the browser compatibility and user experiences across devices, considering progressive enhancement from the beginning would have caught most of those potential issues.

So one change we could make is to be much more proactive about the thinking process instead of jumping straight into the task, thinking that the code is easy when actually the code is the least important part.

In a similar vein, we had the “scope creep” of maxlength, and while we could possibly have anticipated that, we would rather not have any scope creep at all. So getting everybody involved from the beginning would be very useful, as a diverse, multidisciplinary approach to even small tasks like this can seriously reduce the time it takes to figure out and fix all the unexpected tweaks.

Back To The Real World

Okay, so I can get quite deep into this made-up project, but I think it demonstrates well how complicated the most seemingly simple tasks can be. Being user-focussed, having a progressive enhancement mindset, and thinking things through from the beginning can have a real impact on both the speed and quality of delivery. And I didn’t even mention testing!

I went into some detail about the history of the <textarea> and which event listeners to use, some of this can seem overkill, but I find it fascinating to gain a real understanding of the subtleties of the web, and it can often help demystify issues we will face in the future.

Get Your Mobile Site Ready For The 2018 Holiday Season

Suzanne Scacca



After reading the title of this article, it might seem like it’s jumping the gun, but with retailers turning on holiday music and putting out holiday-related displays earlier and earlier every year, your consumers are primed to start thinking about the holidays earlier, too. In fact, a study done by the Tampa Bay Times revealed that in-store shoppers were exposed to holiday music as early as October 22 in 2017.


[Image: Results from TBT’s survey on when holiday music starts (Source: Tampa Bay Times)]

Of course, e-commerce handles the holiday season a bit differently than brick-and-mortar. It’s not really necessary to announce promotions or run sales in late October or early November. However, that doesn’t mean you should wait until the last minute to prepare your mobile website for the holidays.

In this article, I’m going to give you a quick rundown of what happened during the 2017 holiday sales season and, in particular, what role mobile played in it. Then, we’re going to dig into holiday design and marketing tactics you can use to boost sales through your mobile website for the 2018 holiday season.

Recommended reading: How Mobile Web Design Affects Local Search (And What To Do About It)

A Recap Of The 2017 Holiday Sales Season

Before we get started, I want to quickly add a disclaimer:

This particular section focuses on e-commerce statistics because this kind of data is readily available. Something like the total number of page visits, subscribed readers, and leads generated… well, it’s not.

So, although I only use data to express how important mobile was to 2017 holiday sales, keep in mind that the tips that follow pertain to all websites. Even if your site doesn’t expressly sell goods or services, blogs and other content-driven sites can take advantage of this, too!

Now, let’s take a look at the numbers:

Total Retail Sales

The National Retail Federation calculated the total amount of retail sales (online and in-store) to be $691.9 billion between November and December, a 5.5% bump from 2016.

Total e-Commerce Sales

Adobe put the total amount of e-commerce sales during that same timeframe at $108.15 billion in 2017.


[Image: Adobe’s stats on 2016 and 2017 holiday e-commerce revenue (Source: Adobe)]

e-Commerce Sales By Device

Adobe takes it even further and breaks down the share of revenue by device:


[Image: Breakdown of desktop, smartphone and tablet sales for the 2017 holiday season (Source: Adobe)]

e-Commerce Sales vs. Traffic

While smartphone and tablet sales still trail those on desktop, there are a couple of interesting things to note here. For starters, desktop revenue has mostly flatlined year over year, whereas mobile continues to grow. In addition, there’s an interesting disparity between how much traffic comes from each device and what percentage of revenue it generates:


Traffic vs. revenue breakdown


Traffic vs. revenue for desktop, smartphone, tablet (Source: Adobe) (Large preview)

Pay close attention to desktop and smartphone. As you can see, more visits stem from smartphones than any other device and, yet, desktop leads the way in conversions:


Conversion rates by device


Statista shows the breakdown between desktop, smartphone, and tablet conversions in Q1 2018 (Source: Statista) (Large preview)

Is this indicative of a lack of trust in smart devices to handle purchases?

In all likelihood, no. Data from other sources indicates that, on holidays in particular, mobile reigns supreme in terms of visits and conversions:

  • Thanksgiving Day: 62% of traffic / 46% of purchases.
  • Christmas Day: 68% of traffic / 50% of purchases.

Also, let’s not forget to take into account the strengths of mobile devices within the shopper’s experience. According to the four micro-moments as defined by Google, a large number of mobile users commonly search for the following:

  • “I want to know.”
  • “I want to go.”
  • “I want to do.”
  • “I want to buy.”

The second and third are clearly indicative of a searcher’s desire to find something outside their devices (and their homes) to spend money on. That might even be so for the fourth, though it could also be an indication that they want to do their research on mobile and complete the purchase on desktop.

Either way, we know that smartphones tend to be a primary facilitator in the customer’s journey and not something that’s putting an end to the shopping experience as a whole.

Recommended reading: Designing For Micro-Moments

5 Tips To Prepare Your Mobile Site For The 2018 Holiday Season

While the overall numbers indicate that desktop is the leading platform for holiday sales, it’s not a universal rule that can be applied to each and every day in November and December. This is why your own data will have to play a big role in the design choices you make for your mobile site this season.

You have to admit, no matter how stressed or unhappy you might feel around the holidays, there is something nice about encountering just the right hint of holiday “cheer”. And that’s one of the keys to doing this right: finding the right amount of holiday flavor to infuse into your website.

Before we get into what you can do to spruce up your mobile web design, I want to remind you that security and speed are critical elements to check off your list before November gets here. These might not be in your realm of responsibilities, but that doesn’t mean you shouldn’t keep an eye on them.

If you’re doing all this design work in anticipation of boosting conversions over the holidays, don’t let it all be for nothing by forgetting about performance and security essentials. To protect your site from potentially harmful traffic surges, start with this front-end performance checklist. With regards to security, you can use these security improvement tips.

Now, let’s talk about the five ways in which you can prepare your mobile website for the 2018 holiday season:

1. Study Last Year’s Data

If your website has been live and actively doing business for more than a year, you need to start with the data from 2017. Using Google Analytics and your CRM platform, locate answers to the following questions:

What was the predominant device that generated traffic? Sales?

Google Analytics allows you to divvy up traffic based on technology in a number of ways:

Under Browser & OS, you can sort visitors by browser:


Google Analytics browser data


Google Analytics shows which browsers users visited from (Source: Google Analytics) (Large preview)

There is a small tab at the top of the table for “Operating System”. Click that to reveal which operating systems visitors used:


Google Analytics operating system data


Google Analytics breaks down traffic by operating system (Source: Google Analytics) (Large preview)

You can use the Mobile > Overview tab to look at the simple breakdown between desktop, mobile, and tablet users.


Google Analytics device data


Google Analytics division between device traffic (Source: Google Analytics) (Large preview)

Really, your goal here is to weed out desktop users so you can focus strictly on mobile traffic as you assess the following data points.

When did your site experience an increase in traffic in November or December?

Every website’s holiday traffic history will look a little different. Take mine, for example:


A sample Google Analytics holiday traffic chart


An example of holiday traffic ups and downs in Google Analytics (Source: Google Analytics) (Large preview)

My business really isn’t affected by the holidays at all… except that I know things are going to be super quiet on and around Thanksgiving and the major holidays in December. This is still important information for me to have.

If your business sells products or services directly through its site, or you run a content-based site that plans its publication schedule around traffic, you’ll likely see a different trajectory in terms of highs and lows.

When did sales start to increase (if they don’t coincide with traffic)?

Again, for some of you, the matter of sales is irrelevant if you don’t offer any through your site. For everyone else, however, use the Google Analytics Conversions tab along with sales logged through your payment gateway or CRM to check this number.

Just remember that Google Analytics only tracks that data if you’ve set up conversion goals beforehand. If you didn’t have them in place last year, set them up for this year.

Did the holiday uptick remain consistent until the end of the season or were there temporary dropoffs?

Much of this has to do with how you promote holiday-related events, promotional offers or content through your website. If you consistently market around the holidays from November 1 to the end of the year, you should see relatively steady traffic and sales.

Some days, of course, may be slower than others (like during workdays or earlier in the season), so it’s good to get a sense for the ebb and flow of your site’s holiday traffic. On the other hand, your website might be a major draw only on special sales days and the holidays themselves, so you can use this data to harness your energy for a big push on the days when it’ll have the greatest impact.

Try to identify patterns, so you can plan your design and marketing strategy accordingly.

When did traffic and sales return to their usual amount?

At some point, your site is going to see a dip in activity. There are some businesses that embrace this.

Let’s use Xfinity as an example. Around mid-November of last year, this is the holiday-centric message the top of the home page was pushing:


Xfinity holiday promotion


Xfinity promotes ways to make your home holiday ready (Source: Xfinity) (Large preview)

A month later, on December 9, any mention of the holidays was gone and replaced by a promotion of the upcoming Olympic Winter Games.


Xfinity December promotion


Xfinity stops promoting holidays in December (Source: Xfinity) (Large preview)

One can only assume that a major sporting event like the Olympics does more to help Xfinity sign new subscribers than trying to capture last-minute holiday sales does.

Logically, this makes sense. December is a busy time for families. They’re planning travel, purchasing gifts and running around town in preparation for the upcoming celebrations. Most people probably don’t have time to set up a new cable or Internet package and wait around for Xfinity to configure it then.

Bottom line: it’s okay if your holiday-related traffic and sales drop off earlier than December 31. Study your data and let your user behavior guide you in your mobile design and promotion strategy.

What were the most popular sources for mobile traffic?

It’s actually not enough to identify the most popular sources of mobile traffic for your site. Sure, you want to know if organic SEO and social media promotional efforts worked to bring traffic to it… but it won’t really matter if those visitors abandoned the site without taking action.

When you start digging through the ways in which you acquired mobile visitors, make sure to review the sources and keywords used against other telling metrics, like:

  • Bounce rate
  • Time on site
  • Pages visited

This will give you a good sense of which sources (e.g. keywords, PPC ads, social media content, promotional backlinks from other sites) attracted high-quality leads to your site during the holiday season.

What were the most/least successful promotions?

One more thing to look at is what exactly performed the best between November and December with mobile visitors.

Did you run a pop-up promoting free shipping that was dismissed by most mobile visitors, but greatly taken advantage of by those on desktop? Did your custom home page banner touting an upcoming Black Friday sale get more clicks than the home page banner otherwise does at other times of the year? And what pathway resulted in the most conversions?

Dig into what exactly it was that appealed to your mobile visitors. Then, as you work on this year’s plan, focus on reproducing that success.

2. Assess The Navigation

The navigation plays two important roles on a website:

  1. High-level tabs inform visitors on what they’ll find on the site; essentially answering the question, “Is this of relevance to me?”
  2. The navigation itself provides visitors with shortcuts to parts of the site that matter most to them, simplifying their pathway to conversion.

When reviewing your navigation in the context of holiday traffic, you must ensure that it fulfills both of these roles.

Let’s look at two websites that provide relevant links during the holidays while also streamlining the visitors’ journey from entry to holiday-related pages.

Food52 is an online hub for people who enjoy cooking. You can buy kitchen gadgets from the site and peruse a whole bunch of content related to food and cooking.

I want to call out a number of things Food52 does especially well in terms of navigation:


Thanksgiving categories on Food52


The Food52 home page includes Thanksgiving-related categories (Source: Food52) (Large preview)

  1. The hamburger menu is prominently displayed in the top-left, which is exactly where visitors’ eyes will go as they follow the Z-shaped pattern for reading.
  2. The shopping cart, search bar shortcut and profile link are also displayed in the top header, making it easy to navigate to elements that support the shopping experience.
  3. If you scroll down on the home page (as I’ve done in the screenshot above), Food52 includes a good mix of Thanksgiving-related content along with its standard fare. In addition, it includes categories that help users filter through content that’s most relevant to them.

One other thing I’d like to point out is the navigation itself:


Simplified mobile navigation from Food52


Simplified and customized navigation from Food52 for Thanksgiving (Source: Food52) (Large preview)

There are a number of things you’ll notice:

  • The mobile navigation is quite simplified. Despite how many categories and types of pages the site has, the navigation keeps this from being an overwhelming choice.
  • There are special tabs for Thanksgiving and Holiday. This will get users directly to content related to the holiday they’re cooking for.
  • The Hotline — which is its customer service forum — is also featured in the mobile navigation. This element is especially important around the holidays when visitors have questions they need answered quickly.

L.L.Bean is another website that handles mobile navigation well.


L.L.Bean Navigation


L.L.Bean puts the essentials in the navigation (Source: L.L.Bean) (Large preview)

As you can see, there are four buttons located within the mobile header:

  • Hamburger navigation icon: bolded and well-placed;
  • L.L.Bean logo for easy backtracking to the home page;
  • A shopping cart icon which will keep stored items top-of-mind with mobile users;
  • An ever-present search bar to speed up navigation even further.

Once a mobile user expands the hamburger navigation, they encounter this:


L.L.Bean hamburger navigation


L.L.Bean prioritizes customer service and gifts around the holidays (Source: L.L.Bean) (Large preview)

As you can see, “Call Us” is the first option available within the mobile navigation. Again, with people in a rush and trying to get purchases done right over the holidays, having a direct line of communication to the company is important. The account link and “Ship To” personalization are also nice touches as these icons keep conversion top-of-mind.

Now, looking down the navigation, you’ll see this is a pretty standard mega menu. However, take note that at the very top of this category (as is the case for all others) appears a page for “Gifts”. This is not something you see the rest of the year, so that’s another holiday-related touch meant to streamline searches and sales.

3. Use Add-ons At Checkout

Here is everything you need to know to optimize conversions at mobile checkout. If I can add an additional two cents to this matter, though, I’d like to briefly talk about add-ons at checkout… but only around the holidays.

Typically, I believe that a fully streamlined checkout process is essential to capturing as many conversions as possible on mobile devices. It’s hard enough typing out all that information (if it doesn’t auto-populate) and trusting that devices and websites will keep payment information secure.

However…

When it comes to designing the checkout for holiday shoppers, I think it’s at least worth experimenting with add-ons. For example:

  • Promo codes;
  • Free delivery options;
  • Shorter but more premium delivery options, or in-store pickup;
  • Gift wrapping.

Nordstrom doesn’t even wait for visitors to get to the checkout to promote this.


Nordstrom free shipping


Nordstrom promotes free shipping and returns right away (Source: Nordstrom) (Large preview)

The very top of the site has a sticky bar promoting the free shipping and returns offer. This way, visitors are already in the mindset that they can get their Black Friday purchases or holiday gifts for even cheaper than planned.

Fitbit has another example of this I really like:


Fitbit holiday promotions


Fitbit promotes sales and free expedited shipping (Source: Fitbit) (Large preview)

The top-half of the Fitbit homepage gets visitors into the mindset that there are cost savings galore here. Not only are items on sale, but certain orders come with free and expedited shipping. And the site clearly states when the sale ends, which will keep customers from getting upset if gifts don’t arrive on time. (It will also probably motivate them to get their shopping done sooner if they want to cash in on the sale.)

So all appropriate expectations regarding pricing and shipping are set right from the very get-go, making checkout go more smoothly.

I know that some may argue these will be bad for UX (and normally I’d join them), but I don’t see them as distractions during the holidays. This is an expensive and busy time of year.

Anything you can add to checkout that says, “Hey, we’re thinking about you and want to make this holiday season go just a little more smoothly” would go over well with your users.

4. Give Images A Seasonal Touch

Images are a tricky thing this time of year. You want to use them to appeal to holiday-minded visitors, but you don’t want to overdo it because images add a lot of weight to your pages. You need your site running fast, so be smart about what you do with them.

  1. Resize them before you ever add them to your site. There’s no need to upload oversized images if they’re going to be displayed at smaller dimensions anyway.
  2. Optimize your images with compression tools before and after they’re added to the design. This will free up some of the space they would otherwise take. (See the sketch after this list for one way to script the first two steps.)
  3. If your users’ journey starts above the fold, you might want to consider lazy-loading images that appear further down the page.
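The first two items can be scripted ahead of time. Below is a minimal sketch (not from this article) that assumes the Pillow library is installed (pip install Pillow) and that your source images sit in a local images/ folder; the folder names, 1200px ceiling, and quality setting are placeholders to tune for your own site.

# resize_and_compress.py -- a rough sketch, not a production pipeline.
from pathlib import Path
from PIL import Image

MAX_SIZE = (1200, 1200)        # hypothetical ceiling for your largest display size
SOURCE = Path("images")        # assumed folder of original JPEGs
DEST = SOURCE / "optimized"
DEST.mkdir(exist_ok=True)

for path in SOURCE.glob("*.jpg"):
    img = Image.open(path)
    img.thumbnail(MAX_SIZE)    # shrink in place, preserving aspect ratio
    # Recompress; quality=80 is a common starting point, adjust to taste.
    img.save(DEST / path.name, optimize=True, quality=80)
    print(path.name, "->", img.size)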

That said, images can go a long way in communicating to visitors that your site and business are ready to spread some holiday cheer without ever having to explicitly say it. This might be the ideal choice for those of you who design websites for global audiences. Perhaps you’d rather use an image that evokes a festive feeling because you don’t want to unintentionally offend anyone who doesn’t celebrate the holiday your copy expressly calls attention to.

Here is a great example from Uncommon Goods:


Uncommon Goods holiday home page


Uncommon Goods holiday home page (Source: Uncommon Goods) (Large preview)

I wouldn’t necessarily say the images used here are festive, but there are unique elements that evoke a certain association with the holidays: the color green used within the photos, or the partial glimpses of what appear to be snow globes. They’re seasonal elements, but not necessarily relegated to Christmas, Hanukkah or Kwanzaa.

Then, there’s the United States Postal Service (USPS) website. Granted, this website targets visitors within the United States, but it remains mindful of the differences in religions practiced and holidays celebrated.


USPS festive image


USPS uses a non-denominational image to promote the holidays (Source: USPS) (Large preview)

The message remains neutral as does the image itself. The USPS is simply trying to help people quickly and festively send holiday cards, gifts and other items to distant relatives and friends.

5. Review The Customer Journey

The factor of speed is a big one when it comes to designing the customer journey. While the navigation cuts down on any unnecessary steps that might be taken when visitors can afford a more leisurely pace, your design should expedite the rest.

In other words:

  • Start talking about holiday-related content, products, pages and links right on the home page.
  • Make sure you have at least one mention above-the-fold, whether it’s in the navigation, in a blog link or in a seasonal promo.
  • Use the data from last year to streamline the ideal pathway from the home page to conversion.
  • Walk through that pathway as a visitor on both desktop and mobile. Is it as clear, concise and direct as possible?
  • Check the responsiveness of the pathway. Your site, in general, needs to be responsive, but if you’re optimizing a certain journey for visitors and you want them to convert on mobile, then extra care needs to be taken.

Below is another example from the Food52 website from the holidays. As you can see in this snippet, two kinds of holiday-related content are promoted. What’s cool about them, though, is that it’s not necessarily in-your-face.


Food52’s festive home page design


Food52 adds a holiday touch to its home page design and copy (Source: Food52) (Large preview)

The relish recipe could easily be used any time of the year. However, because pomegranates are often considered a winter food, this falls into the category of holiday-related content. The second post is more blatant about attracting holiday readers.

The final element in this screenshot is also worth taking note of. To start, it appears they’ve customized the copy specifically for this time of year. All it takes is one addition of the word “joyfully” to let visitors know that Food52 took time to make its site just a little more festive.

I also want to give them kudos for including a newsletter subscription box here and in other key areas of the site.

If the research from Adobe is right and mobile generates far less revenue than its share of traffic would suggest, then this is a smart design choice. This way, Food52 can collect visitor information on mobile and contact them later. When interested visitors receive the reminder at a more convenient time and place, they can hop onto their desktop or other preferred device and finish the conversion process.

Another site which I think handles the customer journey optimization well is Cracker Barrel.


Cracker Barrel home page design


Cracker Barrel home page design (Source: Cracker Barrel) (Large preview)

Cracker Barrel doesn’t overdo it when it comes to designing for the holidays. Instead, it’s developed a series of calls-to-action that set certain types of visitors on the right path.

The first one features an image of what looks like a holiday feast with the CTA “Order Heat N’ Serve”. That’s brilliant. If people are taking the time to visit this site right before Thanksgiving, it’s probably to see if they can get help preparing their major feast… which it appears they can.

The second section sort of looks festive, though I’d still say they play it safe with choice of color, texture and gift card image. With a CTA of “Buy Gift Cards”, they’re now appealing to holiday shoppers. Not only can you get a whole feast conveniently prepared by Cracker Barrel, but you can buy gifts here, too.

Sometimes designing for the holidays isn’t about the blatant use of snowflake imagery or promoting recipes for cooking a turkey. Sometimes it’s about understanding what your users’ particular needs are at that time and helping set them on that exact journey right away.

Wrap-Up

I understand that there are ways to add a dancing Santa to a site or to spruce up pop-ups with animated text and images, but I think subtler is better.

It’s kind of like the whole holiday music and decorations thing. How many times have you gone to your local drug store at the end of October for the purposes of getting Halloween candy, only to be met by an entire aisle full of holiday decorations? Or maybe you entered a department store like Macy’s in November, thinking you’ll beat the crazy holiday crowds. And, yet, holiday music is already playing. It’s overkill.

If you want to impress mobile visitors with your website around the holidays, focus on making this a worthwhile experience. Optimize your server for high volumes of traffic, put extra security in place, reorganize the navigation and add some small festive touches to your design that call attention to the most relevant parts of your site at this time of year.




Thumbnail

How to Optimize Your Website for SEO and Conversions


Have you learned how to optimize your website for both SEO and conversions? If not, your website isn’t working as hard as it should. SEO and conversions might exist in separate parts of the marketing sector, but they’re inextricably linked. If you have good SEO, you can attract more traffic and get more opportunities to convert potential customers. A website optimized for conversions typically has better metrics, such as time on page and bounce rate, which means that Google might rank it higher. The following tips and strategies will teach you how to optimize your website for both SEO and…

The post How to Optimize Your Website for SEO and Conversions appeared first on The Daily Egg.


Thumbnail

Building A Room Detector For IoT Devices On Mac OS




Building A Room Detector For IoT Devices On Mac OS

Alvin Wan



Knowing which room you’re in enables various IoT applications — from turning on the light to changing TV channels. So, how can we detect the moment you and your phone are in the kitchen, or bedroom, or living room? With today’s commodity hardware, there are a myriad of possibilities:

One solution is to equip each room with a Bluetooth device. Once your phone is within range of a room’s Bluetooth device, it knows which room it is in. However, maintaining an array of Bluetooth devices is significant overhead, from replacing batteries to replacing dysfunctional devices. Additionally, proximity to the Bluetooth device is not always the answer: if you’re in the living room, by the wall shared with the kitchen, your kitchen appliances should not start churning out food.

Another, albeit impractical, solution is to use GPS. However, keep in mind that GPS works poorly indoors, where the multitude of walls, other signals, and other obstacles wreak havoc on GPS’s precision.

Our approach instead is to leverage all in-range WiFi networks — even the ones your phone is not connected to. Here is how: consider the strength of WiFi A in the kitchen; say it is 5. Since there is a wall between the kitchen and the bedroom, we can reasonably expect the strength of WiFi A in the bedroom to differ; say it is 2. We can exploit this difference to predict which room we’re in. What’s more: WiFi network B from our neighbor can only be detected from the living room but is effectively invisible from the kitchen. That makes prediction even easier. In sum, the list of all in-range WiFi networks gives us plentiful information.
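To make that intuition concrete, here is a tiny illustration with made-up numbers. It is not part of the tutorial’s code (which trains a least squares model later on): each room stores an averaged fingerprint of signal strengths, and a fresh scan is assigned to the room whose fingerprint it most closely matches. The network names, strengths, and the nearest-fingerprint rule are all hypothetical simplifications.

# Toy WiFi fingerprinting with invented signal strengths.
# Each fingerprint lists strengths in a fixed order: [WiFi A, WiFi B].
fingerprints = {
    "kitchen":     [5, 0],   # the neighbor's WiFi B is invisible from the kitchen
    "living room": [2, 4],
}

def closest_room(scan):
    """Return the room whose stored fingerprint is nearest to the scan."""
    def distance(room):
        stored = fingerprints[room]
        return sum((a - b) ** 2 for a, b in zip(stored, scan))
    return min(fingerprints, key=distance)

print(closest_room([4, 1]))  # kitchen
print(closest_room([1, 5]))  # living room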

This method has the distinct advantages of:

  1. not requiring more hardware;
  2. relying on more stable signals like WiFi;
  3. working well where other techniques such as GPS are weak.

The more walls the better, as the more disparate the WiFi network strengths, the easier the rooms are to classify. You will build a simple desktop app that collects data, learns from the data, and predicts which room you’re in at any given time.


Prerequisites

For this tutorial, you will need a Mac running macOS. While the code can apply to any platform, we will only provide dependency installation instructions for Mac.

Step 0: Setup Work Environment

Your desktop app will be written in NodeJS. However, to leverage more efficient computational libraries like numpy, the training and prediction code will be written in Python. To start, we will set up your environment and install dependencies. Create a new directory to house your project.

mkdir ~/riot

Navigate into the directory.

cd ~/riot

Use pip to install Python’s default virtual environment manager.

sudo pip install virtualenv

Create a Python 3.6 virtual environment named riot.

virtualenv riot --python=python3.6

Activate the virtual environment.

source riot/bin/activate

Your prompt is now preceded by (riot). This indicates we have successfully entered the virtual environment. Install the following packages using pip:

  • numpy: An efficient linear algebra library
  • scipy: A scientific computing library that implements popular machine learning models

pip install numpy==1.14.3 scipy==1.1.0

With the working directory set up, we will start with a desktop app that records all WiFi networks in range. These recordings will constitute training data for your machine learning model. Once we have data on hand, you will write a least squares classifier, trained on the WiFi signals collected earlier. Finally, we will use the least squares model to predict the room you’re in, based on the WiFi networks in range.

Step 1: Initial Desktop Application

In this step, we will create a new desktop application using Electron JS. To begin, we will install the Node package manager npm and a download utility, wget.

brew install npm wget

To begin, we will create a new Node project.

npm init

This prompts you for the package name and then the version number. Hit ENTER to accept the default name of riot and default version of 1.0.0.

package name: (riot)
version: (1.0.0)

This prompts you for a project description. Add any non-empty description you would like. Below, the description is room detector

description: room detector

This prompts you for the entry point, or the main file to run the project from. Enter app.js.

entry point: (index.js) app.js

This prompts you for the test command and git repository. Hit ENTER to skip these fields for now.

test command:
git repository:

This prompts you for keywords and author. Fill in any values you would like. Below, we use iot, wifi for keywords and use John Doe for the author.

keywords: iot,wifi
author: John Doe

This prompts you for the license. Hit ENTER to accept the default value of ISC.

license: (ISC)

At this point, npm will prompt you with a summary of information so far. Your output should be similar to the following.

{
  "name": "riot",
  "version": "1.0.0",
  "description": "room detector",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "iot",
    "wifi"
  ],
  "author": "John Doe",
  "license": "ISC"
}

Hit ENTER to accept. npm then produces a package.json. List all files to double-check.

ls

This will output the only file in this directory, along with the virtual environment folder.

package.json
riot

Install NodeJS dependencies for our project.

npm install electron --global  # makes electron binary accessible globally
npm install node-wifi --save

Start with main.js from the Electron Quick Start by downloading the file with the command below. The -O argument saves main.js as app.js.

wget https://raw.githubusercontent.com/electron/electron-quick-start/master/main.js -O app.js

Open app.js in nano or your favorite text editor.

nano app.js

On line 12, change index.html to static/index.html, as we will create a directory static to contain all HTML templates.

function createWindow () {
  // Create the browser window.
  win = new BrowserWindow({width: 1200, height: 800})

  // and load the index.html of the app.
  win.loadFile('static/index.html')

  // Open the DevTools.
Save your changes and exit the editor. Your file should match the source code of the app.js file. Now create a new directory to house our HTML templates.

mkdir static

Download a stylesheet created for this project.

wget https://raw.githubusercontent.com/alvinwan/riot/master/static/style.css?token=AB-ObfDtD46ANlqrObDanckTQJ2Q1Pyuks5bf79PwA%3D%3D -O static/style.css

Open static/index.html in nano or your favorite text editor. Start with the standard HTML structure.

<!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Riot | Room Detector</title>
    </head>
    <body>
      <main>
      </main>
    </body>
  </html>

Right after the title, link the Montserrat font linked by Google Fonts and stylesheet.

<title>Riot | Room Detector</title>
  <!-- start new code -->
  <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet">
  <link href="style.css" rel="stylesheet">
  <!-- end new code -->
</head>

Between the main tags, add a slot for the predicted room name.

<main>
  <!-- start new code -->
  <p class="text">I believe you’re in the</p>
  <h1 class="title" id="predicted-room-name">(I dunno)</h1>
  <!-- end new code -->
</main>

Your script should now match the following exactly. Exit the editor.

<!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Riot | Room Detector</title>
      <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet">
      <link href="style.css" rel="stylesheet">
    </head>
    <body>
      <main>
        <p class="text">I believe you’re in the</p>
        <h1 class="title" id="predicted-room-name">(I dunno)</h1>
      </main>
    </body>
  </html>

Now, amend the package file to contain a start command.

nano package.json

Right after line 7, add a "start" command that is aliased to electron . (the dot is part of the command). Make sure to add a comma to the end of the previous line.

"scripts": 
  "test": "echo "Error: no test specified" && exit 1",
  "start": "electron ."
,

Save and exit. You are now ready to launch your desktop app in Electron JS. Use npm to launch your application.

npm start

Your desktop application should match the following.


home page with button


Home page with “Add New Room” button available (Large preview)

This completes your starting desktop app. To exit, navigate back to your terminal and CTRL+C. In the next step, we will record wifi networks, and make the recording utility accessible through the desktop application UI.

Step 2: Record WiFi Networks

In this step, you will write a NodeJS script that records the strength and frequency of all in-range wifi networks. Create a directory for your scripts.

mkdir scripts

Open scripts/observe.js in nano or your favorite text editor.

nano scripts/observe.js

Import a NodeJS wifi utility and the filesystem object.

var wifi = require('node-wifi');
var fs = require('fs');

Define a record function that accepts a completion handler.

/**
 * Uses a recursive function for repeated scans, since scans are asynchronous.
 */
function record(n, completion, hook) {

Inside the new function, initialize the wifi utility. Set iface to null to initialize to a random wifi interface, as this value is currently irrelevant.

function record(n, completion, hook) {
    wifi.init({
        iface : null
    });
}

Define an array to contain your samples. Samples are training data we will use for our model. The samples in this particular tutorial are lists of in-range wifi networks and their associated strengths, frequencies, names etc.

function record(n, completion, hook) {
    ...
    samples = []

Define a recursive function startScan, which will asynchronously initiate wifi scans. Upon completion, the asynchronous wifi scan will then recursively invoke startScan.

function record(n, completion, hook) {
  ...
  function startScan(i) {
    wifi.scan(function(err, networks) {
    });
  }
  startScan(n);
}

In the wifi.scan callback, check for errors or empty lists of networks and restart the scan if so.

wifi.scan(function(err, networks) {
  if (err || networks.length == 0) {
    return startScan(i);
  }
});

Add the recursive function’s base case, which invokes the completion handler.

wifi.scan(function(err, networks) {
  ...
  if (i <= 0) {
    return completion({samples: samples});
  }
});

Output a progress update, append to the list of samples, and make the recursive call.

wifi.scan(function(err, networks) {
  ...
  hook(n-i+1, networks);
  samples.push(networks);
  startScan(i-1);
});

At the end of your file, invoke the record function with a callback that saves samples to a file on disk.

function record(n, completion, hook) {
  ...
}

function cli() {
  record(1, function(data) {
    fs.writeFile('samples.json', JSON.stringify(data), 'utf8', function() {});
  }, function(i, networks) {
    console.log(" * [INFO] Collected sample " + i + " with " + networks.length + " networks");
  })
}

cli();

Double check that your file matches the following:

var wifi = require('node-wifi');
var fs = require('fs');

/**
 * Uses a recursive function for repeated scans, since scans are asynchronous.
 */
function record(n, completion, hook) {
  wifi.init({
      iface : null // network interface, choose a random wifi interface if set to null
  });

  samples = []
  function startScan(i) {
    wifi.scan(function(err, networks) {
        if (err || networks.length == 0) {
          return startScan(i);
        }
        if (i <= 0) {
          return completion({samples: samples});
        }
        hook(n-i+1, networks);
        samples.push(networks);
        startScan(i-1);
    });
  }

  startScan(n);
}

function cli() {
    record(1, function(data) {
        fs.writeFile('samples.json', JSON.stringify(data), 'utf8', function() {});
    }, function(i, networks) {
        console.log(" * [INFO] Collected sample " + i + " with " + networks.length + " networks");
    })
}

cli();

Save and exit. Run the script.

node scripts/observe.js

Your output will match the following, with variable numbers of networks.

 * [INFO] Collected sample 1 with 39 networks

Examine the samples that were just collected. Pipe to json_pp to pretty print the JSON and pipe to head to view the first 16 lines.

cat samples.json | json_pp | head -16

The below is example output for a 2.4 GHz network.

{
  "samples": [
    [
      {
        "mac": "64:0f:28:79:9a:29",
        "bssid": "64:0f:28:79:9a:29",
        "ssid": "SMASHINGMAGAZINEROCKS",
        "channel": 4,
        "frequency": 2427,
        "signal_level": "-91",
        "security": "WPA WPA2",
        "security_flags": [
          "(PSK/AES,TKIP/TKIP)",
          "(PSK/AES,TKIP/TKIP)"
        ]
      },

This concludes your NodeJS wifi-scanning script. This allows us to view all in-range WiFi networks. In the next step, you will make this script accessible from the desktop app.

Step 3: Connect Scan Script To Desktop App

In this step, you will first add a button to the desktop app to trigger the script with. Then, you will update the desktop app UI with the script’s progress.

Open static/index.html.

nano static/index.html

Insert the “Add” button, as shown below.

<h1 class="title" id="predicted-room-name">(I dunno)</h1>
        <!-- start new code -->
        <div class="buttons">
            <a href="add.html" class="button">Add new room</a>
        </div>
        <!-- end new code -->
    </main>

Save and exit. Open static/add.html.

nano static/add.html

Paste the following content.

<!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Riot | Add New Room</title>
      <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet">
      <link href="style.css" rel="stylesheet">
    </head>
    <body>
      <main>
        <h1 class="title" id="add-title">0</h1>
        <p class="subtitle">of <span>20</span> samples needed. Feel free to move around the room.</p>
        <input type="text" id="add-room-name" class="text-field" placeholder="(room name)">
        <div class="buttons">
          <a href="#" id="start-recording" class="button">Start recording</a>
          <a href="index.html" class="button light">Cancel</a>
        </div>
        <p class="text" id="add-status" style="display:none"></p>
      </main>
      <script>
        require('../scripts/observe.js')
      </script>
    </body>
  </html>

Save and exit. Reopen scripts/observe.js.

nano scripts/observe.js

Beneath the cli function, define a new ui function.

function cli() {
    ...
}

// start new code
function ui() {
}
// end new code

cli();

Update the desktop app status to indicate the function has started running.

function ui() {
  var room_name = document.querySelector('#add-room-name').value;
  var status = document.querySelector('#add-status');
  var number = document.querySelector('#add-title');
  status.style.display = "block"
  status.innerHTML = "Listening for wifi..."

Partition the data into training and validation data sets.

function ui() {
  ...
  function completion(data) {
    train_data = {samples: data['samples'].slice(0, 15)}
    test_data = {samples: data['samples'].slice(15)}
    var train_json = JSON.stringify(train_data);
    var test_json = JSON.stringify(test_data);
  }
}

Still within the completion callback, write both datasets to disk.

function ui() {
  ...
  function completion(data) {
    ...
    fs.writeFile('data/' + room_name + '_train.json', train_json, 'utf8', function() {});
    fs.writeFile('data/' + room_name + '_test.json', test_json, 'utf8', function() {});
    console.log(" * [INFO] Done")
    status.innerHTML = "Done."
  }
}

Invoke record with the appropriate callbacks to record 20 samples and save the samples to disk.

function ui() {
  ...
  function completion(data) {
    ...
  }
  record(20, completion, function(i, networks) {
    number.innerHTML = i
    console.log(" * [INFO] Collected sample " + i + " with " + networks.length + " networks")
  })
}

Finally, invoke the cli and ui functions where appropriate. Start by deleting the cli(); call at the bottom of the file.

function ui() {
    ...
}

cli();  // remove me

Check if the document object is globally accessible. If not, the script is being run from the command line. In this case, invoke the cli function. If it is, the script is loaded from within the desktop app. In this case, bind the click listener to the ui function.

if (typeof document == 'undefined') {
    cli();
} else {
    document.querySelector('#start-recording').addEventListener('click', ui)
}
Save and exit. Create a directory to hold our data.

mkdir data

Launch the desktop app.

npm start

You will see the following homepage. Click on “Add new room”.




(Large preview)

You will see the following form. Type in a name for the room. Remember this name, as we will use this later on. Our example will be bedroom.


Add New Room page


“Add New Room” page on load (Large preview)

Click “Start recording,” and you will see the following status “Listening for wifi…”.


starting recording


“Add New Room” starting recording (Large Preview)

Once all 20 samples are recorded, your app will match the following. The status will read “Done.”




“Add New Room” page after recording is complete (Large preview)

Click on the misnamed “Cancel” to return to the homepage, which matches the following.


finished recording


Home page after recording is complete (Large preview)

We can now scan wifi networks from the desktop UI, which will save all recorded samples to files on disk. Next, we will train an out-of-the-box machine learning algorithm, least squares, on the data you have collected.

Step 4: Write Python Training Script

In this step, we will write a training script in Python. Create a directory for your training utilities.

mkdir model

Open model/train.py

nano model/train.py

At the top of your file, import the numpy computational library and scipy for its least squares model.

import numpy as np
from scipy.linalg import lstsq
import json
import sys

The next three utilities will handle loading and setting up data from the files on disk. Start by adding a utility function that flattens nested lists. You will use this to flatten a list of list of samples.

import sys

def flatten(list_of_lists):
    """Flatten a list of lists to make a list.
    >>> flatten([[1], [2], [3, 4]])
    [1, 2, 3, 4]
    """
    return sum(list_of_lists, [])

Add a second utility that loads samples from the specified files. This method abstracts away the fact that samples are spread out across multiple files, returning just a single generator for all samples. For each of the samples, the label is the index of the file. e.g., If you call get_all_samples('a.json', 'b.json'), all samples in a.json will have label 0 and all samples in b.json will have label 1.

def get_all_samples(paths):
  """Load all samples from JSON files."""
  for label, path in enumerate(paths):
    with open(path) as f:
      for sample in json.load(f)['samples']:
        signal_levels = [
          network['signal_level'].replace('RSSI', '') or 0
          for network in sample]
        yield [network['mac'] for network in sample], signal_levels, label

Next, add a utility that encodes the samples using a bag-of-words-esque model. Here is an example: Assume we collect two samples.

  1. wifi network A at strength 10 and wifi network B at strength 15
  2. wifi network B at strength 20 and wifi network C at strength 25.

This function will produce a list of three numbers for each of the samples: the first value is the strength of wifi network A, the second for network B, and the third for C. In effect, the format is [A, B, C].

  1. [10, 15, 0]
  2. [0, 20, 25]

def bag_of_words(all_networks, all_strengths, ordering):
  """Apply bag-of-words encoding to categorical variables.

  >>> samples = bag_of_words(
  ...     [['a', 'b'], ['b', 'c'], ['a', 'c']],
  ...     [[1, 2], [2, 3], [1, 3]],
  ...     ['a', 'b', 'c'])
  >>> next(samples)
  [1, 2, 0]
  >>> next(samples)
  [0, 2, 3]
  """
  for networks, strengths in zip(all_networks, all_strengths):
    yield [int(strengths[networks.index(network)])
      if network in networks else 0
      for network in ordering]

Using all three utilities above, we synthesize a collection of samples and their labels. Gather all samples and labels using get_all_samples. Define a consistent ordering of network names, then apply the bag-of-words encoding to the samples using that ordering. Finally, construct the data and label matrices X and Y, respectively.

def create_dataset(classpaths, ordering=None):
  """Create dataset from a list of paths to JSON files."""
  networks, strengths, labels = zip(*get_all_samples(classpaths))
  if ordering is None:
    ordering = list(sorted(set(flatten(networks))))
  X = np.array(list(bag_of_words(networks, strengths, ordering))).astype(np.float64)
  Y = np.array(list(labels)).astype(np.int)
  return X, Y, ordering

These functions complete the data pipeline. Next, we abstract away model prediction and evaluation. Start by defining the prediction method. The first function normalizes our model outputs, so that the sum of all values totals to 1 and that all values are non-negative; this ensures that the output is a valid probability distribution. The second evaluates the model.

def softmax(x):
  """Convert one-hotted outputs into probability distribution"""
  x = np.exp(x)
  return x / np.sum(x)


def predict(X, w):
  """Predict using model parameters"""
  return np.argmax(softmax(X.dot(w)), axis=1)

Next, evaluate the model’s accuracy. The first line runs prediction using the model. The second counts the numbers of times both predicted and true values agree, then normalizes by the total number of samples.

def evaluate(X, Y, w):
  """Evaluate model w on samples X and labels Y."""
  Y_pred = predict(X, w)
  accuracy = (Y == Y_pred).sum() / X.shape[0]
  return accuracy

This concludes our prediction and evaluation utilities. After these utilities, define a main function that will collect the dataset, train, and evaluate. Start by reading the list of arguments from the command line sys.argv; these are the rooms to include in training. Then create a large dataset from all of the specified rooms.

def main():
  classes = sys.argv[1:]

  train_paths = sorted(['data/{}_train.json'.format(name) for name in classes])
  test_paths = sorted(['data/{}_test.json'.format(name) for name in classes])
  X_train, Y_train, ordering = create_dataset(train_paths)
  X_test, Y_test, _ = create_dataset(test_paths, ordering=ordering)

Apply one-hot encoding to the labels. A one-hot encoding is similar to the bag-of-words model above; we use this encoding to handle categorical variables. Say we have 3 possible labels. Instead of labelling 1, 2, or 3, we label the data with [1, 0, 0], [0, 1, 0], or [0, 0, 1]. For this tutorial, we will spare the explanation for why one-hot encoding is important. Train the model, and evaluate on both the train and validation sets.

def main():
  ...
  X_test, Y_test, _ = create_dataset(test_paths, ordering=ordering)
  
  Y_train_oh = np.eye(len(classes))[Y_train]
  w, _, _, _ = lstsq(X_train, Y_train_oh)
  train_accuracy = evaluate(X_train, Y_train, w)
  test_accuracy = evaluate(X_test, Y_test, w)
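As a quick aside (this is not part of model/train.py, just an illustration with made-up labels), here is what the np.eye trick above produces for a toy vector of three-class labels:

import numpy as np

# Toy illustration of one-hot encoding, mirroring the Y_train_oh line above.
# Each label indexes a row of the 3x3 identity matrix.
labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]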

Print both accuracies, and save the model to disk.

def main():
  ...
  print('Train accuracy ({}%), Validation accuracy ({}%)'.format(train_accuracy*100, test_accuracy*100))
  np.save('w.npy', w)
  np.save('ordering.npy', np.array(ordering))
  sys.stdout.flush()

At the end of the file, run the main function.

if __name__ == '__main__':
  main()

Save and exit. Double check that your file matches the following:

import numpy as np
from scipy.linalg import lstsq
import json
import sys


def flatten(list_of_lists):
    """Flatten a list of lists to make a list.
    >>> flatten([[1], [2], [3, 4]])
    [1, 2, 3, 4]
    """
    return sum(list_of_lists, [])


def get_all_samples(paths):
    """Load all samples from JSON files."""
    for label, path in enumerate(paths):
        with open(path) as f:
            for sample in json.load(f)['samples']:
                signal_levels = [
                    network['signal_level'].replace('RSSI', '') or 0
                    for network in sample]
                yield [network['mac'] for network in sample], signal_levels, label


def bag_of_words(all_networks, all_strengths, ordering):
    """Apply bag-of-words encoding to categorical variables.
    >>> samples = bag_of_words(
    ...     [['a', 'b'], ['b', 'c'], ['a', 'c']],
    ...     [[1, 2], [2, 3], [1, 3]],
    ...     ['a', 'b', 'c'])
    >>> next(samples)
    [1, 2, 0]
    >>> next(samples)
    [0, 2, 3]
    """
    for networks, strengths in zip(all_networks, all_strengths):
        yield [int(strengths[networks.index(network)])
            if network in networks else 0
            for network in ordering]


def create_dataset(classpaths, ordering=None):
    """Create dataset from a list of paths to JSON files."""
    networks, strengths, labels = zip(*get_all_samples(classpaths))
    if ordering is None:
        ordering = list(sorted(set(flatten(networks))))
    X = np.array(list(bag_of_words(networks, strengths, ordering))).astype(np.float64)
    Y = np.array(list(labels)).astype(np.int)
    return X, Y, ordering


def softmax(x):
    """Convert one-hotted outputs into probability distribution"""
    x = np.exp(x)
    return x / np.sum(x)


def predict(X, w):
    """Predict using model parameters"""
    return np.argmax(softmax(X.dot(w)), axis=1)


def evaluate(X, Y, w):
    """Evaluate model w on samples X and labels Y."""
    Y_pred = predict(X, w)
    accuracy = (Y == Y_pred).sum() / X.shape[0]
    return accuracy


def main():
    classes = sys.argv[1:]

    train_paths = sorted(['data/{}_train.json'.format(name) for name in classes])
    test_paths = sorted(['data/{}_test.json'.format(name) for name in classes])
    X_train, Y_train, ordering = create_dataset(train_paths)
    X_test, Y_test, _ = create_dataset(test_paths, ordering=ordering)

    Y_train_oh = np.eye(len(classes))[Y_train]
    w, _, _, _ = lstsq(X_train, Y_train_oh)
    train_accuracy = evaluate(X_train, Y_train, w)
    validation_accuracy = evaluate(X_test, Y_test, w)

    print('Train accuracy ({}%), Validation accuracy ({}%)'.format(train_accuracy*100, validation_accuracy*100))
    np.save('w.npy', w)
    np.save('ordering.npy', np.array(ordering))
    sys.stdout.flush()


if __name__ == '__main__':
    main()

Save and exit. Recall the room name used above when recording the 20 samples. Use that name instead of bedroom below. Our example is bedroom. We use -W ignore to ignore warnings from a LAPACK bug.

python -W ignore model/train.py bedroom

Since we’ve only collected training samples for one room, you should see 100% training and validation accuracies.

Train accuracy (100.0%), Validation accuracy (100.0%)

Next, we will link this training script to the desktop app.

Step 5: Link Training Script To Desktop App

In this step, we will automatically retrain the model whenever the user collects a new batch of samples. Open scripts/observe.js.

nano scripts/observe.js

Right after the fs import, import the child process spawner and utilities.

var fs = require('fs');
// start new code
const spawn = require("child_process").spawn;
var utils = require('./utils.js');

In the ui function, add the following call to retrain at the end of the completion handler.

function ui() {
  ...
  function completion(data) {
    ...
    retrain((data) => {
      var status = document.querySelector('#add-status');
      accuracies = data.toString().split('\n')[0];
      status.innerHTML = "Retraining succeeded: " + accuracies
    });
  }
  ...
}

After the ui function, add the following retrain function. This spawns a child process that will run the python script. Upon completion, the process calls a completion handler. Upon failure, it will log the error message.

function ui() {
  ...
}

function retrain(completion) {
  var filenames = utils.get_filenames()
  const pythonProcess = spawn('python', ["./model/train.py"].concat(filenames));
  pythonProcess.stdout.on('data', completion);
  pythonProcess.stderr.on('data', (data) => {
    console.log(" * [ERROR] " + data.toString())
  })
}

Save and exit. Open scripts/utils.js.

nano scripts/utils.js

Add the following utility for fetching all datasets in data/.

var fs = require('fs');

module.exports = {
  get_filenames: get_filenames
}

function get_filenames() {
  filenames = new Set([]);
  fs.readdirSync("data/").forEach(function(filename) {
      filenames.add(filename.replace('_train', '').replace('_test', '').replace('.json', '' ))
  });
  filenames = Array.from(filenames.values())
  filenames.sort();
  // Only drop .DS_Store if it is actually present; indexOf returns -1 otherwise.
  var index = filenames.indexOf('.DS_Store');
  if (index > -1) {
    filenames.splice(index, 1);
  }
  return filenames
}

Save and exit. For the conclusion of this step, physically move to a new location. There ideally should be a wall between your original location and your new location. The more barriers, the better your desktop app will work.

Once again, run your desktop app.

npm start

Just as before, we will record samples for a new room; this time, the model will retrain automatically once recording finishes. Click on “Add new room”.


home page with button


Home page with “Add New Room” button available (Large preview)

Type in a room name that is different from your first room’s. We will use living room.


Add New Room page


“Add New Room” page on load (Large preview)

Click “Start recording,” and you will see the following status “Listening for wifi…”.




“Add New Room” starting recording for second room (Large preview)

Once all 20 samples are recorded, your app will match the following. The status will read “Done. Retraining model…”


finished recording 2


“Add New Room” page after recording for second room complete (Large preview)

In the next step, we will use this retrained model to predict the room you’re in, on the fly.

Step 6: Write Python Evaluation Script

In this step, we will load the pretrained model parameters, scan for wifi networks, and predict the room based on the scan.

Open model/eval.py.

nano model/eval.py

Import libraries used and defined in our last script.

import numpy as np
import sys
import json
import os
import json

from train import predict
from train import softmax
from train import create_dataset
from train import evaluate

Define a utility to extract the names of all datasets. This function assumes that all datasets are stored in data/ as <dataset>_train.json and <dataset>_test.json.

from train import evaluate

def get_datasets():
  """Extract dataset names."""
  return sorted(list(path.split('_')[0] for path in os.listdir('./data')
    if '.DS' not in path))

Define the main function, and start by loading parameters saved from the training script.

def get_datasets():
  ...

def main():
  w = np.load('w.npy')
  ordering = np.load('ordering.npy')

Create the dataset and predict.

def main():
  ...
  classpaths = [sys.argv[1]]
  X, _, _ = create_dataset(classpaths, ordering)
  y = np.asscalar(predict(X, w))

Compute a confidence score based on the difference between the top two probabilities.

def main():
  ...
  sorted_y = sorted(softmax(X.dot(w)).flatten())
  confidence = 1
  if len(sorted_y) > 1:
    confidence = round(sorted_y[-1] - sorted_y[-2], 2)

Finally, extract the category and print the result. To conclude the script, invoke the main function.

def main():
  ...
  category = get_datasets()[y]
  print(json.dumps({"category": category, "confidence": confidence}))

if __name__ == '__main__':
  main()

Save and exit. Double check your code matches the following (source code):

import numpy as np
import sys
import json
import os
import json

from train import predict
from train import softmax
from train import create_dataset
from train import evaluate


def get_datasets():
    """Extract dataset names."""
    return sorted(list(path.split('_')[0] for path in os.listdir('./data')
        if '.DS' not in path))


def main():
    w = np.load('w.npy')
    ordering = np.load('ordering.npy')

    classpaths = [sys.argv[1]]
    X, _, _ = create_dataset(classpaths, ordering)
    y = np.asscalar(predict(X, w))

    sorted_y = sorted(softmax(X.dot(w)).flatten())
    confidence = 1
    if len(sorted_y) > 1:
        confidence = round(sorted_y[-1] - sorted_y[-2], 2)

    category = get_datasets()[y]
    print(json.dumps({"category": category, "confidence": confidence}))


if __name__ == '__main__':
    main()

Next, we will connect this evaluation script to the desktop app. The desktop app will continuously run wifi scans and update the UI with the predicted room.

Step 7: Connect Evaluation To Desktop App

In this step, we will update the UI with a “confidence” display. Then, the associated NodeJS script will continuously run scans and predictions, updating the UI accordingly.

Open static/index.html.

nano static/index.html

Add a line for confidence right after the title and before the buttons.

<h1 class="title" id="predicted-room-name">(I dunno)</h1>
<!-- start new code -->
<p class="subtitle">with <span id="predicted-confidence">0%</span> confidence</p>
<!-- end new code -->
<div class="buttons">

Right after main but before the end of the body, add a new script predict.js.

</main>
  <!-- start new code -->
  <script>
  require('../scripts/predict.js')
  </script>
  <!-- end new code -->
</body>

Save and exit. Open scripts/predict.js.

nano scripts/predict.js

Import the needed NodeJS utilities for the filesystem, utilities, and child process spawner.

var fs = require('fs');
var utils = require('./utils');
const spawn = require("child_process").spawn;

Define a predict function which invokes a separate node process to detect wifi networks and a separate Python process to predict the room.

function predict(completion) {
  // spawn a node process to detect in-range WiFi networks
  const nodeProcess = spawn('node', ["scripts/observe.js"]);
  // spawn a Python process to predict the room from the recorded samples
  const pythonProcess = spawn('python', ["-W", "ignore", "./model/eval.py", "samples.json"]);

After both processes have spawned, add callbacks to the Python process for both successes and errors. The success callback logs information, invokes the completion callback, and updates the UI with the prediction and confidence. The error callback logs the error.

function predict(completion) {
  ...
  pythonProcess.stdout.on('data', (data) => {
    information = JSON.parse(data.toString());
    console.log(" * [INFO] Room '" + information.category + "' with confidence '" + information.confidence + "'")
    completion()

    // update the UI only when running inside the Electron renderer
    if (typeof document != "undefined") {
      document.querySelector('#predicted-room-name').innerHTML = information.category
      document.querySelector('#predicted-confidence').innerHTML = information.confidence
    }
  });
  pythonProcess.stderr.on('data', (data) => {
    console.log(data.toString());
  })
}

Define a main function to invoke the predict function recursively, forever.

function main() {
  // run predictions forever: each completed prediction schedules the next one
  f = function() { predict(f) }
  predict(f)
}

main();

One last time, open the desktop app to see the live prediction.

npm start

Approximately every second, a scan will be completed and the interface will be updated with the latest confidence and predicted room. Congratulations; you have completed a simple room detector based on all in-range WiFi networks.

demo
Recording 20 samples inside the room and another 20 out in the hallway. Upon walking back inside, the script correctly predicts “hallway” then “bedroom.” (Large preview)

Conclusion

In this tutorial, we created a solution that uses only your desktop to detect your location within a building. We built a simple desktop app using Electron JS and applied a simple machine learning method to all in-range WiFi networks. This paves the way for Internet-of-Things applications without the need for arrays of devices that are costly to maintain (costly not in terms of money, but in terms of time and development effort).

Note: You can see the source code in its entirety on Github.

With time, you may find that this least squares model does not, in fact, perform spectacularly. Try distinguishing two locations within a single room, or standing in doorways. Least squares is largely unable to tell such edge cases apart. Can we do better? It turns out that we can, and in future lessons, we will leverage other techniques and the fundamentals of machine learning to achieve better performance. This tutorial serves as a quick test bed for experiments to come.



This article is from - 

Building A Room Detector For IoT Devices On Mac OS

Thumbnail

8 Surefire Strategies to Boost Your Blog Conversions

blog-conversions-featured

In 2018, marketers are competing for an audience share that is both smart and digitally savvy. That means they can’t rely on the same old marketing strategy to carry blog conversions. If you’ve noticed stagnation in your conversion rate, it may be time to revise your strategy to boost conversions and keep your business growing. Be sure you’re incorporating these eight proven techniques to maximize your blog conversions. 1. Know your Target The most obvious element in conversion is a laser-like focus on the users you’re aiming to attract and convert. Are you posting articles like mad, but lacking the…

The post 8 Surefire Strategies to Boost Your Blog Conversions appeared first on The Daily Egg.

Originally posted here:  

8 Surefire Strategies to Boost Your Blog Conversions

Thumbnail

How to Use a Website Click Tracking Tool to Improve the User Experience

When it comes to understanding your audience, you can’t get more granular than a website click tracking tool. Instead of looking at big picture metrics, you can drill down to the basics and get to know what works with your audience — and what doesn’t. While many site tracking tools exist, website click tracking tools offer the most depth when you want to better understand user behavior. To see what I mean, visit a website you’ve never been to before. Just Google a broad topic such as “marathon training” or “best thriller novels.” It doesn’t matter. Click on one of…

The post How to Use a Website Click Tracking Tool to Improve the User Experience appeared first on The Daily Egg.

Jump to original: 

How to Use a Website Click Tracking Tool to Improve the User Experience