Tag Archives: data

Introducing “The WebP Manual”

Markus Seyfferth



What’s WebP in the first place? Can we actually use it today? And if yes, how exactly? The role of media in performance, specifically images, is of huge concern. Images are powerful. Engaging visuals evoke visceral feelings. They can provide key information and context to articles, or merely add humorous asides. They do anything for us that plain text just can’t by itself.

But when there’s too much imagery, it can be frustrating for users on slow connections, or run afoul of data plan allowances. In the latter scenario, that can cost users real money. This sort of inadvertent trespass can carry real consequences.

In this eBook, you’ll learn all about WebP: what it’s capable of, how it performs, how to convert images to the format in a variety of ways, and most importantly, how to use it. Of course, the eBook is, and always will be, free for all Smashing Members.

84 pages. Written by Jeremy Wagner. Cover Design by Ricardo Gimenes. Available in PDF, Kindle, and ePub formats.


What’s In The eBook

This guide will encourage you to experiment and see what’s possible with WebP:

  • WebP Basics
    WebP images usually use less disk space than other formats at reasonably comparable visual quality. Depending on your site’s audience and the browsers they use, this is an opportunity to deliver less data-intensive user experiences to a significant segment of your audience.

  • Performance
    We’ll cover how both lossy and lossless WebP compare to JPEGs and PNGs exported by a number of image encoders.

  • Converting Images To WebP (Excerpt)
    This can be done in a myriad of ways: from something as simple as exporting from your preferred design program, to using Cloudinary and similar services, to converting in Node.js-based build systems (see the sketch after this list). Here, we’ll cover all avenues.

  • Using WebP Images
    Because WebP isn’t supported in all browsers just yet, you’ll need to learn how to use it so that sites and applications gracefully fall back to established formats when WebP support is lacking. Here, we’ll discuss the many ways you can use WebP responsibly, starting by detecting browser support in the Accept request header.
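To give a flavor of the build-system route mentioned in the list above, here is a minimal Node.js sketch using the imagemin and imagemin-webp packages. These particular packages, paths, and the quality setting are my own example rather than something prescribed by the eBook, and the exact API may differ between versions:

// convert-to-webp.js: batch-convert JPEGs and PNGs to WebP during a build.
const imagemin = require('imagemin');
const imageminWebp = require('imagemin-webp');

(async () => {
  const files = await imagemin(['src/images/*.{jpg,png}'], {
    destination: 'dist/images',                 // where the .webp files go
    plugins: [imageminWebp({ quality: 75 })],   // lossy WebP at quality 75
  });
  console.log(`Converted ${files.length} images to WebP.`);
})();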

About The Author

Jeremy Wagner is a performance-obsessed front-end developer, author and speaker living and working in the frozen wastes of Saint Paul, Minnesota. He is also the author of Web Performance in Action, a web developer’s companion guide for creating fast websites. You can find him on Twitter @malchata, or read his blog of ramblings.

Here’s Why This eBook Is For You

The WebP Manual will get you ready for the new image format that is capable of delivering significantly less data-intensive user experiences for a majority of your audience:

  • Learn how lossy and lossless WebP compare to JPEGs and PNGs exported by a number of image encoders.
  • Learn which services and plugins you can use to export or convert images to WebP with your preferred design tool or command line tool.
  • Learn how you can use WebP in production, and how to implement proper fallbacks for browsers that don’t support WebP just yet (a small server-side sketch follows this list).
  • Learn how to use the full potential of the WebP format. It will substantially improve loading performance for many of your users, customers, and clients, and it will become one of your favorite tools for making websites as lean as possible.
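To illustrate the fallback point above on the server side, here is a rough sketch of Accept-header detection, assuming an Express app; the route and file layout are hypothetical, and this is only one of the techniques covered. On the client side, the picture element with a WebP source is the other common approach.

// Hypothetical Express route: serve a .webp file when the browser
// advertises support in its Accept header, otherwise fall back to JPEG.
const express = require('express');
const path = require('path');
const app = express();

app.get('/images/:name.jpg', (req, res) => {
  const supportsWebp = (req.headers.accept || '').includes('image/webp');
  const file = supportsWebp ? `${req.params.name}.webp` : `${req.params.name}.jpg`;
  res.sendFile(file, { root: path.resolve('dist/images') });
});

app.listen(3000);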

The eBook is free for Smashing Members (you can cancel anytime, of course).



Visit link – 

Introducing “The WebP Manual”

Creative Data Visualization Techniques

Full-day workshop • June 28th
With so many tools available to visualize your data, it’s easy to get stuck in thinking about chart types, always just going for that bar or line chart, without truly thinking about effectiveness. In this workshop, Nadieh will teach you how you can take a more creative and practical approach to the design of data visualization.
Broaden your horizons and see what is possible beyond the basic set of charts.

Follow this link:

Creative Data Visualization Techniques

Building A Pub/Sub Service In-House Using Node.js And Redis

Dhimil Gosalia



Today’s world operates in real time. Whether it’s trading stock or ordering food, consumers today expect immediate results. Likewise, we all expect to know things immediately — whether it’s in news or sports. Zero, in other words, is the new hero.

This applies to software developers as well — arguably some of the most impatient people! Before diving into BrowserStack’s story, it would be remiss of me not to provide some background about Pub/Sub. For those of you who are familiar with the basics, feel free to skip the next two paragraphs.

Many applications today rely on real-time data transfer. Let’s look closer at an example: social networks. The likes of Facebook and Twitter generate relevant feeds, and you (via their app) consume them and spy on your friends. They accomplish this with a messaging feature, wherein if a user generates data, it is posted for others to consume in nothing short of a blink. Any significant delays and users will complain, usage will drop, and if the delays persist, users will churn. The stakes are high, and so are user expectations. So how do services like WhatsApp, Facebook, TD Ameritrade, Wall Street Journal and GrubHub support high volumes of real-time data transfers?

All of them use a similar software architecture at a high level called a “Publish-Subscribe” model, commonly referred to as Pub/Sub.

“In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers, but instead categorize published messages into classes without knowledge of which subscribers, if any, there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are.“

Wikipedia

Bored by the definition? Back to our story.

At BrowserStack, all of our products support (in one way or another) software with a substantial real-time dependency component — whether it’s Automate test logs, freshly baked browser screenshots, or 15fps mobile streaming.

In such cases, if a single message drops, a customer can lose information vital for preventing a bug. Therefore, we needed to scale for varied data size requirements. For example, with device logger services, there may be 50MB of data generated under a single message at a given point in time. Sizes like this could crash the browser. Not to mention that BrowserStack’s system would need to scale for additional products in the future.

As the size of data for each message differs from a few bytes to up to 100MB, we needed a scalable solution that could support a multitude of scenarios. In other words, we sought a sword that could cut all cakes. In this article, I will discuss the why, how, and results of building our Pub/Sub service in-house.

Through the lens of BrowserStack’s real-world problem, you will get a deeper understanding of the requirements and process of building your very own Pub/Sub.

Our Need For A Pub/Sub Service

BrowserStack has around 100M+ messages, each of which is somewhere between approximately 2 bytes and 100+ MB. These are passed around the world at any moment, all at different Internet speeds.

The largest generators of these messages, by message size, are our BrowserStack Automate and App Automate products. Both have real-time dashboards displaying all requests and responses for each command of a user test. So, if someone runs a test with 100 requests where the average request-response size is 10 bytes, this transmits 1×100×10 = 1000 bytes.

Now let’s consider the larger picture as — of course — we don’t run just one test a day. More than 850,000 BrowserStack Automate and App Automate tests are run each and every day. And yes, we average around 235 request-response pairs per test. Since users can take screenshots or ask for page sources in Selenium, our average request-response size is approximately 220 bytes.

So, going back to our calculator:

850,000×235×220 = 43,945,000,000 bytes (approx.) or only 43.945GB per day

Now let’s talk about BrowserStack Live and App Live. Automate is clearly our winner in terms of data size. However, the Live products take the lead when it comes to the number of messages passed. For every live test, about 20 messages are passed for every minute it runs. We run around 100,000 live tests, with each test averaging around 12 minutes, meaning:

100,000×12×20 = 24,000,000 messages per day

Now for the remarkable bit: we build, run, and maintain the application for this, called pusher, on six t1.micro EC2 instances. The cost of running the service? About $70 per month.

Choosing To Build vs. Buying

First things first: As a startup, like most others, we were always excited to build things in-house. But we still evaluated a few services out there. The primary requirements we had were:

  1. Reliability and stability,
  2. High performance, and
  3. Cost-effectiveness.

Let’s leave the cost-effectiveness criterion out, as I can’t think of any external services that cost under $70 a month (tweet me if you know one that does!). So our answer there is obvious.

In terms of reliability and stability, we found companies that provided Pub/Sub as a service with 99.9+ percent uptime SLA, but there were many T&C’s attached. The problem is not as simple as you think, especially when you consider the vast lands of the open Internet that lie between the system and client. Anyone familiar with Internet infrastructure knows stable connectivity is the biggest challenge. Additionally, the amount of data sent depends on traffic. For example, a data pipe that’s at zero for one minute may burst during the next. Services providing adequate reliability during such burst moments are rare (Google and Amazon).

Performance for our project means obtaining and sending data to all listening nodes at near zero latency. At BrowserStack, we utilize cloud services (AWS) along with co-location hosting. However, our publishers and/or subscribers could be placed anywhere. For example, it may involve an AWS application server generating much-needed log data, or terminals (machines where users can securely connect for testing). Coming back to the open Internet issue again, if we were to reduce our risk we would have to ensure our Pub/Sub leveraged the best host services and AWS.

Another essential requirement was the ability to transmit all types of data (Bytes, text, weird media data, etc.). With all considered, it did not make sense to rely on a third-party solution to support our products. In turn, we decided to revive our startup spirit, rolling up our sleeves to code our own solution.

Building Our Solution

Pub/Sub by design means there will be a publisher, generating and sending data, and a Subscriber accepting and processing it. This is similar to a radio: A radio channel broadcasts (publishes) content everywhere within a range. As a subscriber, you can decide whether to tune into that channel and listen (or turn off your radio altogether). 

Unlike the radio analogy, where data is free for all and anyone can decide to tune in, in our digital scenario we need authentication, which means data generated by the publisher may be intended for only one particular client or subscriber.


Basic working of Pub/Sub

Above is a diagram providing an example of a good Pub/Sub with:

  • Publishers
    Here we have two publishers generating messages based on pre-defined logic. In our radio analogy, these are our radio jockeys creating the content.
  • Topics
    There are two here, meaning there are two types of data. We can say these are our radio channels 1 and 2.
  • Subscribers
    We have three that each read data on a particular topic. One thing to notice is that Subscriber 2 is reading from multiple topics. In our radio analogy, these are the people who are tuned into a radio channel. 
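To make the diagram concrete, here is a minimal sketch of the same idea in Node.js with Redis (the stack we ended up choosing, as described below). The topic names are illustrative only:

const Redis = require('ioredis');

// Subscriber 2 from the diagram: listens on both topics (Redis channels).
const subscriber = new Redis();
subscriber.on('message', (channel, message) => {
  console.log(`received on ${channel}: ${message}`);
});

subscriber.subscribe('topic-1', 'topic-2', () => {
  // A publisher pushes messages onto named topics. A separate connection is
  // used because a connection in subscriber mode cannot issue other commands.
  const publisher = new Redis();
  publisher.publish('topic-1', 'hello from publisher 1');
  publisher.publish('topic-2', 'hello from publisher 2');
});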

Let’s start understanding the necessary requirements for the service.

  1. An evented component
    This kicks in only when there is something to kick in.
  2. Transient storage
    This keeps data persisted for a short duration so if the subscriber is slow, it still has a window to consume it.
  3. Reducing the latency
    Connecting two entities over a network with minimum hops and distance.

We picked a technology stack that fulfilled the above requirements:

  1. Node.js
    Because why not? It’s evented, we wouldn’t need heavy data processing, plus it’s easy to onboard.
  2. Redis
    It supports short-lived data perfectly. It has all the capabilities to initiate, update and auto-expire. It also puts less load on the application.

Node.js For Business Logic Connectivity

Node.js is a nearly perfect fit when it comes to writing code incorporating IO and events. Our particular problem had both, making this option the most practical for our needs.

Surely other languages such as Java could be more optimized, or a language like Python offers scalability. However, the cost of starting with these languages is so high that a developer could finish writing code in Node in the same duration. 

To be honest, if the service had a chance of adding more complicated features, we could have looked at other languages or a complete stack. But here it is a marriage made in heaven. Here is our package.json:


  "name": "Pusher",
  "version": "1.0.0",
  "dependencies": 
    "bstack-analytics": "*****", // Hidden for BrowserStack reasons. :)
    "ioredis": "^2.5.0",
    "socket.io": "^1.4.4"
  ,
  "devDependencies": {},
  "scripts": 
    "start": "node server.js"
  
}

Very simply put, we believe in minimalism especially when it comes to writing code. On the other hand, we could have used libraries like Express to write extensible code for this project. However, our startup instincts decided to pass on this and to save it for the next project. Additional tools we used:

  • ioredis
    This is one of the most supported libraries for Redis connectivity with Node.js used by companies including Alibaba.
  • socket.io
    The best library for graceful connectivity and fallback with WebSocket and HTTP.

Redis For Transient Storage

Redis as a service is highly reliable, scalable and configurable. Plus there are many reliable managed service providers for Redis, including AWS. Even if you don’t want to use a provider, Redis is easy to get started with.

Let’s break down the configurable part. We started off with the usual master-slave configuration, but Redis also comes with cluster or sentinel modes. Every mode has its own advantages.

If we could shard the data in some way, a Redis cluster would be the best choice. But if we sharded the data by some heuristic, we would have less flexibility, as the heuristic would have to be followed everywhere. Fewer rules, more control is good for life!

Redis Sentinel works best for us: data lookup is done in just the one node we are connected to at a given point in time, and the data is not sharded. This also means that even if multiple nodes are lost, the data is still distributed and present in other nodes. So you have higher availability and less chance of loss. Of course, this removes the pros of having a cluster, but our use case is different.
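With the ioredis client mentioned above, pointing the application at Sentinel instead of a single Redis node is a small configuration change. The hostnames, ports, and master group name here are placeholders, not our production values:

const Redis = require('ioredis');

// Hostnames, ports, and the master group name are placeholders.
const redis = new Redis({
  sentinels: [
    { host: 'sentinel-1.internal', port: 26379 },
    { host: 'sentinel-2.internal', port: 26379 },
  ],
  name: 'mymaster', // the master group configured in Sentinel
});

// ioredis asks the Sentinels for the current master and reconnects
// automatically if a failover promotes a different node.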

Architecture At 30000 Feet

The diagram below provides a very high-level picture of how our Automate and App Automate dashboards work. Remember the real-time system that we had from the earlier section?


BrowserStack’s real-time Automate and App Automate dashboards

In our diagram, our main workflow is highlighted with thicker borders. The “automate” section consists of:

  1. Terminals
    Comprised of the pristine versions of Windows, OSX, Android or iOS that you get while testing on BrowserStack.
  2. Hub
    The point of contact for all your Selenium and Appium tests with BrowserStack.

The “user service” section here is our gatekeeper, ensuring data is sent to and saved for the right individual. It is also our security keeper. The “pusher” section incorporates the heart of what we discussed in this article. It consists of the usual suspects including:

  1. Redis
    Our transient storage for messages, where in our case automate logs are temporarily stored.
  2. Publisher
    This is basically the entity that obtains data from the hub. All your request-responses are captured by this component, which writes to Redis with session_id as the channel.
  3. Subscriber
    This reads data from Redis generated for the session_id. It is also the web server for clients to connect via WebSocket (or HTTP) to get data and then sends it to authenticated clients.

Finally, we have the user’s browser section, representing an authenticated WebSocket connection to ensure session_id logs are sent. This enables the front-end JS to parse and beautify it for users.

Similar to the logs service, pusher is also used here for other product integrations. Instead of session_id, we use another form of ID to represent the channel. This all works out of pusher!
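To make the flow above more concrete, here is a stripped-down sketch of what the publisher and subscriber pieces might look like with ioredis and socket.io. It is an illustration only: the real pusher also handles authentication through the user service, HTTP fallbacks, and error handling.

// publisher.js: the hub calls something like this for every
// request-response pair it captures.
const Redis = require('ioredis');
const pub = new Redis();

function publishLog(sessionId, payload) {
  // The session_id doubles as the Redis channel name.
  pub.publish(sessionId, JSON.stringify(payload));
}

// subscriber.js: relays channel data to WebSocket clients.
const sub = new Redis();
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  socket.on('watch-session', (sessionId) => {
    // Real code would first verify that this user owns the session.
    socket.join(sessionId);
    sub.subscribe(sessionId);
  });
});

sub.on('message', (channel, message) => {
  // channel === session_id, so every browser in that room gets the log line.
  io.to(channel).emit('log', JSON.parse(message));
});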

Conclusion (TLDR)

We’ve had considerable success in building out Pub/Sub. To sum up why we built it in-house:

  1. Scales better for our needs;
  2. Cheaper than outsourced services;
  3. Full control over the overall architecture.

Not to mention that JS is the perfect fit for this kind of scenario: an event loop and a massive amount of IO are exactly what the problem needs! The magic of JavaScript’s single pseudo-thread does the rest.

Events and Redis as a system keep things simple for developers, as you can obtain data from one source and push it to another via Redis. So we built it.

If the usage fits into your system, I recommend doing the same!



See original article: 

Building A Pub/Sub Service In-House Using Node.js And Redis


How to Convert Website Visitors into Customers (9 Effective Ways)

Figuring out how to convert website visitors into customers requires not only strategy, but extensive testing. You can learn from others what worked for them, but your website audience is unique. That’s probably why you’re reading this article. You want to know the best place to start. Then, you can test different solutions to increase your conversion rate. I’m a big fan of growth hacking. In other words, my goal is always to get the biggest possible results in the shortest possible time frame. That requires aggressive marketing and strategic application of data. You might take a different approach. Whatever…

The post How to Convert Website Visitors into Customers (9 Effective Ways) appeared first on The Daily Egg.

Read More: 

How to Convert Website Visitors into Customers (9 Effective Ways)

The Ethics Of Persuasion

Lyndon Cerejo



(This article is kindly sponsored by Adobe.) A few months ago, the world was shocked to learn that Cambridge Analytica had improperly used data from a harmless looking personality quiz on Facebook to profile and target the wider audience on the platform with advertisements to persuade them to vote a certain way. Only part of the data was obtained with consent (!), the data was stored by the app creator (!!), and it was sold to Cambridge Analytica in violation of terms of use (!!!). This was an example of black hat design, a deceptive use of persuasion tactics, combined with unethical use of personal information.

On the other hand, the last time you shopped on eBay, you may have noticed the use of multiple design elements encouraging you to buy an item (“last item”, “3 watched in the past day”). While these design techniques are used to persuade users, they are usually not deceptive and are considered white hat techniques.

A third example comes from Google I/O 2018 last month when the world heard Google Duplex make a call to a salon for an appointment and carry out a fluent conversation mimicking human mannerisms so well that the person at the other end did not realize she was talking to a machine. The machine did not misrepresent itself as human, nor did it identify itself as a machine, which, in my book, puts it in a gray area. What’s stopping this from being used in black hat design in the near future?


Examples of persuasive tactics

As you see from the three examples above, the use of persuasive tactics can fall anywhere on a spectrum from black hat at one extreme to white hat at the other, with a large fuzzy gray area separating the two. In today’s world of online and email scams, phishing attacks, and data breaches, users are increasingly cautious of persuasive tactics being used that are not in their best interest. Experience designers, developers, and creators are responsible for making decisions around the ethical nature of the tactics we use in our designs.

This article will present a brief history of persuasion, look at how persuasion is used with technology and new media, and present food for thought for designers and developers to avoid crossing the ethical line to the dark side of persuasion.

History Of Persuasion

Persuasion tactics and techniques are hardly new — they have been used for ages. Aristotle’s Rhetoric, over 2000 years ago, is one of the earliest documents on the art of persuasion. The modes of persuasion Aristotle presented were ethos (credibility), logos (reason), and pathos (emotion). He also discussed how kairos (opportune time) is important for the modes of persuasion.

Fast forward to today, and we see persuasion methods used in advertising, marketing, and communication all around us. When we try to convince someone of a point of view or win that next design client or project, chances are we are using persuasion: a process by which a person’s attitudes or behavior are, without duress, influenced by communications from other people (Encyclopedia Britannica).

While Aristotle first documented persuasion, Robert Cialdini’s Influence: The Psychology of Persuasion is more commonly referenced when talking about modern persuasion. According to Cialdini, there are six key principles of persuasion:

  1. Reciprocity
    People are obliged to give something back in exchange for receiving something.
  2. Scarcity
    People want more of those things they can have less of.
  3. Authority
    People follow the lead of credible, knowledgeable experts.
  4. Consistency
    People like to be consistent with the things they have previously said or done.
  5. Liking
    People prefer to say yes to those that they like.
  6. Consensus (Social Proof)
    Especially when they are uncertain, people will look to the actions and behaviors of others to determine their own.

We have all been exposed to one or more of these principles, and may recognize them in advertising or when interacting with others. While persuasion itself has been around for ages, what is relatively new is the application of persuasion techniques to new technology and media. This started off with personal computers, became more prominent with the Internet, and is now pervasive with mobile devices.

Persuasion Through Technology And New Media

Behavior scientist B.J. Fogg is a pioneer when it comes to the role of technology in persuasion. Over two decades ago, he started exploring the overlap between persuasion and computing technology. This included interactive technologies like websites, software, and devices created for the purpose of changing people’s attitudes or behaviors. He referred to this field as captology, an acronym based on computers as persuasive technologies, and wrote the book on it, Persuasive Technology: Using Computers to Change What We Think and Do.


Captology describes the shaded area where computing technology and persuasion overlap (recreated from BJ Fogg’s CHI 98 paper, Persuasive Computers).

Interactive technologies have many advantages over traditional media because they are interactive. They also have advantages over humans because they can be more persistent (e.g. software update reminders), offer anonymity (great for sensitive topics), can access and manipulate large amounts of data (e.g. Amazon recommendations), can use many styles and modes (text, graphics, audio, video, animation, simulations), can easily scale, and are pervasive.

This last advantage is even more pronounced today, with mobile phones being an extension of our arms, and increased proliferation of smart devices, embedded computing, IoT, wearable technology, Augmented Reality, Virtual Reality, and virtual assistants powered by AI being embedded in anything and everything around us. In addition, today’s technological advances allow us to time and target moments of persuasion for high impact, since it is easy to know a user’s location, context, time, routine, and give them the ability to take action. This could be a reminder from your smartwatch to stand or move, or an offer from the coffee shop while you are a few blocks away.

Ethics And New Technology And Interactive Media

The use of persuasion in traditional media over the past decades has raised questions about the ethical use of persuasion. With new media and pervasive technology, there are more questions about the ethical use of persuasion, some of which are due to the advantages pervasive technology has over traditional media and humans. Anyone using persuasive methods to change people’s minds or behavior should have a thorough understanding of the ethical implications and impact of their work.

One of the key responsibilities of a designer during any design process is to be an advocate for the user. This role becomes even more crucial when persuasion techniques are intentionally used in design, since users may be unaware of the persuasion tactics. Even worse, some users may not be capable of detecting these tactics, as may be the case with children, seniors or other vulnerable users.

BJ Fogg provides six factors that give interactive technologies an advantage over users when it comes to persuasion:

  1. Persuasive intent is masked by novelty
    The web and email are no longer novel, and most of us have wised up to deceptive web practices and the promises of Nigerian Princes, but we still find novelty in new mobile apps, voice interfaces, AR, and VR. Not too long ago, the craze around Pokémon Go raised many ethical questions.
  2. Positive reputation of new technology
    While “It must be true — I saw it on the Internet” is now a punchline, users are still being persuaded to like, comment, share, retweet, spread challenges, and make fake news or bot generated content viral.
  3. Unlimited persistence
    Would you like a used car salesman following you around after your first visit, continually trying to sell you a car? While that thankfully does not happen in real life, your apps and devices are with you all the time, and the ding and glowing screen have the ability to persistently persuade us, even in places and times that may be otherwise inappropriate. This past Lent, my son took a break from his mobile device. When he started it after Easter, he had hundreds of past notifications and alerts from one mobile game offering all sorts of reminders and incentives to come back and use it.
  4. Control over how the interaction unfolds
    Unlike human persuasion, where the person being persuaded has the ability to react and change course, technology has predefined options, controlled by the creators, designers and developers. When designing voice interfaces, creators have to define what their skill will be able to do, and for everything else come back with a “Sorry I can’t help with that”. Just last month, a social network blocked access to their mobile website, asking me to install their app to access their content, without an escape or dismiss option.
  5. Can affect emotion while still being emotionless
    New technology doesn’t have emotion. Even with the recent advances in Artificial Intelligence, machines do not feel emotion like humans do. Back to the Google Duplex assistant call mentioned at the beginning, issues can arise when people are not aware that the voice at the other end is just an emotionless machine, and treat it as another person just like them.
  6. Cannot take responsibility for negative outcomes of persuasion
    What happens when something goes wrong, and the app or the technology cannot take responsibility? Do the creators shoulder that responsibility, even if their persuasion strategies have unintended outcomes, or if misused by their partners? Mark Zuckerberg accepted responsibility for the Cambridge Analytica scandal before and during the congress hearings.

With these unfair advantages at our disposal, how do we, as creators, designers, and developers make ethical choices in our designs and solutions? For one, take a step back and consider the ethical implication and impact of our work, and then take a stand for our users.

Many designers are pushing back and being vocal about some of the ethically questionable nature of tech products and designs. There’s Tristan Harris, a former Google Design Ethicist, who has spoken out about how tech companies’ products hijack users’ minds. Sean Parker, Napster founder and former president of Facebook, described how Facebook was designed to exploit human “vulnerability”. And Basecamp’s Jonas Downey ruminates on how most software products are owned and operated by corporations, whose business interests often contradict their users’ interests.

Design Code Of Conduct

AIGA, the largest professional membership organization for design, has a series on Design Business and Ethics. Design Professionalism author Andy Rutledge also created a Professional Code of Conduct. Both are very detailed and cover the business of design, but not specifically ethics related to design that impacts or influences human behavior.

Other professionals who impact the human mind have ethical principles and codes of conduct, like those published by the American Psychological Association and the British Psychological Society. The purpose of these codes of conduct is to protect participants as well as the reputation of psychology and psychologists themselves. When using psychology in our designs, we could examine how the ethical principles of psychologists are applicable to our work as creators, designers, and developers.

Principles And Questions

Using the Ethical Principles of Psychologists as a framework, I defined how each principle applies to persuasive design and listed questions related to ethical implications of design. These are by no means exhaustive but are intended to be food for thought in each of these areas. Note: When you see ‘design’ in the questions below, it refers to persuasive techniques used in your design, app, product or solution.

Principle A: Beneficence And Nonmaleficence

Do no harm. Your decisions may affect the minds, behavior, and lives of your users and others around them, so be alert and guard against misusing the influence of your designs.

  • Does your design change the way people interact for the better?
  • Does the design aim to keep users spending time they didn’t intend to?
  • Does the design make it easy to access socially unacceptable or illegal items that your users would not have easy access to otherwise?
  • How may your partners (including third-party tools and SDKs) or “bad guys” misuse your design, unknown to you?
  • Would you be comfortable with someone else using your design on you?
  • Would you like someone else to use this design to persuade your mother or your child?

Principle B: Fidelity And Responsibility

Be aware of your responsibility to your intended users, unintended users and society at large. Accept appropriate responsibility for the outcomes of your design.

  • During design, follow up answers to “How might we…?” with “At what cost?”
  • What is the impact of your design/product/solution? Who or what does it replace or impact?
  • If your design was used opposite from your intended use, what could the impact be?
  • Does your design change social norms, etiquette or traditions for the better?
  • Will the design put users in harm’s way or make them vulnerable, intentionally or unintentionally (Study Estimates That Pokémon GO Has Caused More Than 100,000 Traffic Accidents)? How can it be prevented?

Principle C: Integrity

Promote accuracy, honesty, and truthfulness in your designs. Do not cheat, misrepresent or engage in fraud. When deception may be ethically justifiable to maximize benefits and minimize harm, carefully consider the need for, the possible consequences of, and be responsible for correcting any resulting mistrust or other harmful effects that arise from the use of such techniques.

  • Do you need users’ consent? When asking for their consent, are they aware of what exactly they are consenting to?
  • What’s the intent of the design? Is it in the best interest of the user or the creator? Are you open and transparent about your intentions?
  • Does your design use deception, manipulation, misrepresentation, threats, coercion or other dishonest techniques?
  • Are users aware or informed if they are being monitored, or is it covert?
  • Is your design benefiting you or the creators at the expense of your users?
  • What would a future whistleblower say about you and your design?

Principle D: Justice

Exercise reasonable judgment and take precautions to ensure that your potential biases and the limitations of your expertise do not lead to, or condone, unjust practices. Your design should benefit both the creators and users.

  • Does your design contain any designer biases built in (gender, political, or other)?
  • Does your design advocate hate, violence, crime, propaganda?
  • If you did this in person, without technology, would it be considered ethical?
  • What are the benefits to the creators/business? What are the benefits to the users? Are the benefits stacked in favor of the business?
  • Do you make it easy for users to disconnect? Do users have control and the ability to stop, without being subject to further persuasion through other channels?

Principle E: Respect For People’s Rights And Dignity

Respect the dignity and worth of all people, and the rights of individuals to privacy, and confidentiality. Special safeguards may be necessary to protect the rights and welfare of vulnerable users.

  • Are your designs using persuasion with vulnerable users (children, seniors, poor)?
  • Does your design protect users’ privacy and give them control over their settings?
  • Does the design require unnecessary permissions to work?
  • Can your design use a less in-your-face technique to get the same outcome? (e.g. speed monitors on roads instead of surveillance)
  • Does your design make your users a nuisance to others? How can you prevent that?

Conclusion

If you have been designing with white hat techniques, you may appreciate the ethical issues discussed here. However, if you have been designing in the grey or black area, thank you for making it all the way to the end. Ethics in persuasive design are important because ethical designs don’t prey on the disadvantages users have when it comes to interactive technology. As creators, designers, and developers, we have a responsibility to stand up for our users.

Do good. Do no harm. Design ethically.

This article is part of the UX design series sponsored by Adobe. Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.



Read the article: 

The Ethics Of Persuasion


What Happens After The Conversion? How To Optimize Your Marketing Campaigns For Higher Quality Leads

How excited would you be if you doubled the number of leads your marketing campaign was generating in less than a month? What if you found out that the improvement wasn’t an improvement at all, because as lead quantity went up, lead quality was going down? That’s exactly what happened with a campaign I ran once. I can assure you – it’s not fun! One survey of B2B marketers found that their #1 and #2 challenges were generating high quality leads and converting leads into customers: Your Landing Page Conversion Rate Is Only Half Of The Story Converting visitors to leads…

The post What Happens After The Conversion? How To Optimize Your Marketing Campaigns For Higher Quality Leads appeared first on The Daily Egg.

Visit source: 

What Happens After The Conversion? How To Optimize Your Marketing Campaigns For Higher Quality Leads

Building A Central Logging Service In-House

Akhil Labudubariki



We all know how important debugging is for improving application performance and features. BrowserStack runs one million sessions a day on a highly distributed application stack! Each involves several moving parts, as a client’s single session can span multiple components across several geographic regions.

Without the right framework and tools, the debugging process can be a nightmare. In our case, we needed a way to collect events happening during different stages of each process in order to get an in-depth understanding of everything taking place during a session. With our infrastructure, solving this problem became complicated as each component might have multiple events from their lifecycle of processing a request.

That’s why we developed our own in-house Central Logging Service tool (CLS) to record all important events logged during a session. These events help our developers identify conditions where something goes wrong in a session and help keep track of certain key product metrics.

Debugging data ranges from simple things like API response latency to monitoring a user’s network health. In this article, we share our story of building our CLS tool, which reliably collects 70 GB of relevant chronological data per day from 100+ components, at scale, with just two m3.large EC2 instances.

The Decision To Build In-House

First, let’s consider why we built our CLS tool in-house rather than using an existing solution. Each of our sessions sends 15 events on average, from multiple components to the service – translating into approximately 15 million total events per day.

Our service needed the ability to store all this data. We sought a complete solution to support event storing, sending and querying across events. As we considered third-party solutions such as Amplitude and Keen, our evaluation metrics included cost, performance in handling high parallel requests and ease of adoption. Unfortunately, we could not find a fit that met all our requirements within budget – although benefits would have included saving time and minimizing alerts. While it would take additional effort, we decided to develop an in-house solution ourselves.


One of the biggest issues with building in-house is the amount of resources we need to spend to maintain it. (Image source: Digiday)

Technical Details

In terms of architecting for our component, we outlined the following basic requirements:

  • Client Performance
    Does not impact the performance of the client/component sending the events.
  • Scale
    Able to handle a high number of requests in parallel.
  • Service performance
    Quick to process all events being sent to it.
  • Insight into data
    Each event logged needs to have some meta information to be able to uniquely identify the component or user, account or message and give more information to help the developer debug faster.
  • Queryable interface
    Developers can query all events for a particular session, helping to debug a particular session, build component health reports, or generate meaningful performance statistics of our systems.
  • Faster and easier adoption
    Easy integration with an existing or new component without burdening teams and taking up their resources.
  • Low maintenance
    We are a small engineering team, so we sought a solution to minimize alerts!

Building Our CLS Solution

Decision 1: Choosing An Interface To Expose

In developing CLS, we obviously didn’t want to lose any of our data, but we didn’t want component performance to take a hit either. Not to mention the additional factor of preventing existing components from becoming more complicated, since it would delay overall adoption and release. In determining our interface, we considered the following choices:

  1. Storing events in local Redis in each component, as a background processor pushes it to CLS. However, this requires a change in all components, along with an introduction of Redis for components which didn’t already contain it.
  2. A Publisher – Subscriber model, where Redis is closer to the CLS. As everyone publishes events, again we have the factor of components running across the globe. During the time of high-traffic, this would delay components. Further, this write could intermittently jump up to five seconds (due to the internet alone).
  3. Sending events over UDP, which offers a lesser impact on application performance. In this case data would be sent and forgotten, however, the disadvantage here would be data loss.

Interestingly, our data loss over UDP was less than 0.1 percent, which was an acceptable amount for us to consider building such a service. We were able to convince all teams that this amount of loss was worth the performance, and went ahead to leverage a UDP interface that listened to all events being sent.

While one result was a smaller impact on an application’s performance, we did face an issue as UDP traffic was not allowed from all networks, mostly our users’ networks – causing us in some cases to receive no data at all. As a workaround, we supported logging events using HTTP requests. All events coming from the user’s side would be sent via HTTP, whereas all events being recorded from our components would be sent via UDP.

Decision 2: Tech Stack (Language, Framework & Storage)

We are a Ruby shop. However, we were uncertain if Ruby would be a better choice for our particular problem. Our service would have to handle a lot of incoming requests, as well as process a lot of writes. With the Global Interpreter lock, achieving multithreading or concurrency would be difficult in Ruby (please don’t take offense – we love Ruby!). So we needed a solution that would help us achieve this kind of concurrency.

We were also keen to evaluate a new language in our tech stack, and this project seemed perfect for experimenting with new things. That’s when we decided to give Golang a shot, since it offers inbuilt support for concurrency through lightweight threads called goroutines. Each logged data point resembles a key-value pair, where ‘key’ is the event and ‘value’ serves as its associated value.

But having a simple key and value is not enough to retrieve session-related data – there is more metadata to it. To address this, we decided any event needing to be logged would have a session ID along with its key and value. We also added extra fields like the timestamp, user ID and the component logging the data, so that it became easier to fetch and analyze the data.
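As an illustration, a single logged event document might look something like the following; the field names and values are made up for the example rather than being our exact schema:

{
  "session_id": "a1b2c3d4e5",
  "key": "selenium_command",
  "value": "navigated to https://example.com",
  "timestamp": 1528732800,
  "user_id": 42,
  "component": "hub"
}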

Now that we decided on our payload structure, we had to choose our datastore. We considered Elastic Search, but we also wanted to support update requests for keys. This would trigger the entire document to be re-indexed, which might affect the performance of our writes. MongoDB made more sense as a datastore since it would be easier to query all events based on any of the data fields that would be added. This was easy!

Decision 3: DB Size Is Huge And Query And Archiving Sucks!

In order to cut maintenance, our service would have to handle as many events as possible. Given the rate that BrowserStack releases features and products, we were certain the number of our events would increase at higher rates over time, meaning our service would have to continue to perform well. As space increases, reads and writes take more time – which could be a huge hit on the service’s performance.

The first solution we explored was moving logs from a certain period away from the database (in our case, we decided on 15 days). To do this, we created a different database for each day, allowing us to find logs older than a particular period without having to scan all written documents. Now we continually remove databases older than 15 days from Mongo, while of course keeping backups just in case.

The only leftover piece was a developer interface for querying session-related data. Honestly, this was the easiest problem to solve. We provide an HTTP interface where people can query for session-related events in the corresponding MongoDB database, for any data having a particular session ID.

Architecture

Let’s talk about the internal components of the service, considering the following points:

  1. As previously discussed, we needed two interfaces – one listening over UDP and another listening over HTTP. So we built two servers, again one for each interface, to listen for events. As soon as an event arrives, we parse it to check whether it has the required fields – these are session ID, key, and value. If it does not, the data is dropped. Otherwise, the data is passed over a Go channel to another goroutine, whose sole responsibility is to write to the MongoDB.
  2. A possible concern here is writing to the MongoDB. If writes to the MongoDB are slower than the rate at which data is received, this creates a bottleneck. This, in turn, starves other incoming events and means dropped data. The server, therefore, should be fast in processing incoming logs and be ready to process upcoming ones. To address the issue, we split the server into two parts: the first receives all events and queues them up for the second, which processes and writes them into the MongoDB.
  3. For queuing we chose Redis. By dividing the entire component into these two pieces we reduced the server’s workload, giving it room to handle more logs.
  4. We wrote a small service using Sinatra server to handle all the work of querying MongoDB with given parameters. It returns an HTML/JSON response to developers when they need information on a particular session.

All these processes happily run on a single m3.large instance.


CLS v1: A representation of the system’s first architecture. All the components run on one single machine.

Feature Requests

As our CLS tool saw more use over time, it needed more features. Below, we discuss these and how they were added.

Missing Metadata

Gradually, as the number of components in BrowserStack increased, we demanded more from CLS. For example, we needed the ability to log events from components lacking a session ID. Otherwise, obtaining one would burden our infrastructure, in the form of affecting application performance and incurring traffic on our main servers.

We addressed this by enabling event logging using other keys, such as terminal and user IDs. Now whenever a session is created or updated, CLS is informed with the session ID, as well as the respective user and terminal IDs. It stores a map that can be retrieved by the process of writing to MongoDB. Whenever an event that contains either the user or terminal ID is retrieved, the session ID is added.

Handle Spamming (Code Issues In Other Components)

CLS also faced the usual difficulties with handling spam events. We often found deploys in components that generated a huge volume of requests sent to CLS. Other logs would suffer in the process, as the server became too busy to process these and important logs were dropped.

For the most part, the data being logged came in via HTTP requests. To control them, we enabled rate limiting in nginx (using the limit_req_zone directive), which blocks requests from any IP found making more than a certain number of requests in a small window of time. Of course, we do leverage health reports on all blocked IPs and inform the responsible teams.
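For reference, a minimal sketch of that kind of nginx configuration looks like the following. The zone name, rate, burst, and upstream are illustrative values, not our production settings:

# In the http block: track clients by IP and allow roughly 50 requests/second each.
limit_req_zone $binary_remote_addr zone=cls_events:10m rate=50r/s;

server {
    location /events {
        # Allow short bursts, then reject further requests from that IP.
        limit_req zone=cls_events burst=100 nodelay;
        proxy_pass http://cls_backend;   # upstream assumed to be defined elsewhere
    }
}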

Scale v2

As our sessions per day increased, the data being logged to CLS also increased. This affected the queries our developers were running daily, and soon the bottleneck was the machine itself. Our setup consisted of a single two-core machine running all of the above components, along with a bunch of scripts to query Mongo and keep track of key metrics for each product. Over time, the data on the machine had grown heavily and the scripts began to take a lot of CPU time. Even after trying to optimize the Mongo queries, we always came back to the same issues.

To solve this, we added another machine for running health report scripts and the interface to query these sessions. The process involved booting a new machine and setting up a slave of the Mongo running on the main machine. This has helped reduce the CPU spikes we saw every day caused by these scripts.


CLS v2: A representation of the current system’s architecture. Logs are written to the master machine and synced to the slave machine. Developers’ queries run on the slave machine.

Conclusion

Building a service for a task as simple as data logging can get complicated, as the amount of data increases. This article discusses the solutions we explored, along with challenges faced while solving this problem. We experimented with Golang to see how well it would fit with our ecosystem, and so far we have been satisfied. Our choice to create an internal service rather than paying for an external one has been wonderfully cost-efficient. We also didn’t have to scale our setup to another machine until much later – when the volume of our sessions increased. Of course, our choices in developing CLS were completely based on our requirements and priorities.

Today CLS handles up to 15 million events every day, constituting up to 70 GB of data. This data is being used to help us solve any issues our customers face during any session. We also use this data for other purposes. Given the insights each session’s data provides on different products and internal components, we’ve begun leveraging this data to keep track of each product. This is achieved by extracting the key metrics for all the important components.

All in all, we’ve seen great success in building our own CLS tool. If it makes sense for you, I recommend you consider doing the same!



Jump to original:  

Building A Central Logging Service In-House


Are People Watching Your Landing Page Videos? Here’s How to use Google Tag Manager to Check


In 2018, video marketing has become ubiquitous in news feeds and it’s one of the best tools for persuasion you have available to you. In fact, 72% of businesses say video has improved their conversion rates. Naturally, because your landing pages are designed to persuade and convert, it makes total sense you’d want to use videos to boost the power of your offer.

But how do you know if visitors are actually interacting with your landing page videos? If you’re spending money on producing video content (especially if it’s offer-specific), you’ll want to know if your target audience is engaging.

While some of you may have access to a video marketing platform and resulting analytics, this post is going to share how you can get view information for YouTube video players using the free tool Google Tag Manager.

Once you follow the steps below for your Unbounce pages, you’ll be able to see:

  • If visitors are actually watching the videos on your landing pages
  • The duration of how long visitors are watching for, and
  • Where visitors are dropping off (this can help you understand what content to modify to keep visitors engaged).

First up: Add Google Tag Manager to Track Your Landing Page Videos

This is really easy to do in Unbounce. First:

  1. Head to the Script Manager under your Settings tab.
  2. Then, click the green “Add a Script” button.
  3. Next, select the Google Tag Manager option.
  4. Assuming you’ve already signed up for Google Tag Manager, you can add your Container ID.

Set up of Google Tag Manager

Lastly, attach your domain to the script, and you’re all set!

Once you have the script saved, use Google Tag Assistant to confirm the tag is working. After setting up this Tag Manager, next we’ll want to define how we want to track user interactions with our YouTube embeds, which brings us to…

Create Tags to Track Video Engagement

On September 12, 2017, Google Tag Manager released the YouTube Video Trigger which finally gave marketers the opportunity to track engagement from embedded YouTube videos within Google Analytics. Tag Manager added built-in video variables, and we want to confirm they are selected before creating any tags or triggers.

When you get to the Variables page in Google Tag Manager:

  • click on the red Configure button, and simply check the boxes for all the video variables, as seen in the image below:

Configuring Built in Variables

Next, we can create our trigger. Triggers control how the tag will be fired. The only option we need is the YouTube Video trigger type.

From here you can select the specific information you want to capture. These actions include when a user starts a video, completes a video, pause/seeking/buffering, and the duration of how much of the content they actually watch.

See how people are engaging (or not) with landing page videos

In the image above, we see just one option of a trigger you can create. If you choose to select ‘Progress’, you have to choose either Percentages or Time Thresholds. It has to be one or the other. You can’t do both. Using Percentages, you can add any number you like (i.e. it doesn’t have to be the numbers I used in the example above). Tag Manager will automatically add 100 for a completion.

On the other hand, if you choose ‘Time thresholds’, you will add the numbers (in seconds) you’d like to have recorded in Google Analytics. If your campaign focus is on views, I’d stick with Percentages. But, if you want to see where users are dropping off to help you improve the content of your videos, Time Thresholds is a good choice.

Lastly, choose when the trigger will fire. By default Tag Manager will fire the trigger on all videos, but you can choose to fire on only some videos.

You can also make your video triggers a lot more specific. The image below shows several options you have to fire the tag on a variety of custom variables for your YouTube videos. If you only want to track videos on certain landing pages, you can do that, but if you only want to track certain videos no matter what the landing page is, you have that option too. Create the trigger which will give you the data you need to make better decisions about the videos on your landing pages.
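One caveat worth noting (this is how the YouTube trigger works in general, not something specific to Unbounce): the trigger relies on the YouTube iframe JavaScript API. Either tick the option on the trigger that adds JavaScript API support to all YouTube videos, or make sure your embeds include the enablejsapi parameter yourself, replacing VIDEO_ID with your own video’s ID:

<iframe src="https://www.youtube.com/embed/VIDEO_ID?enablejsapi=1"
        width="560" height="315" frameborder="0" allowfullscreen></iframe>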

Now let’s set up the tag!

The image below is just one example of a completed tag set up. Here, you can change the Category, Action, and Label to capture the appropriate video data you want to collect. You can also research and find some cool custom versions of these tags like Simo Ahava’s YouTube Video Trigger. There are many options out there, so find the tag which works best for you.

Now that we can track the YouTube video interactions, let’s view the data.

View the Events Report in Google Analytics

In Google Analytics, head over to Behavior > Events. In the Overview or Top Events sections, you can see the Event Category lists of whatever you are tracking. While Event Category is the default view, you can switch to Event Action or Event Label to get deeper data depending on how you set up your tag.

So, how do you relate YouTube video tracking with our landing pages? Easy. Click on Secondary dimension, search for “landing pages” and select it. From here you’ll be able to see the page URL path alongside the current view you have pulled up.

We now have the data in Google Analytics to see which videos users interact with the most, how long users are watching the embedded YouTube videos, and which landing pages are actually seeing video engagement.

Now You Have Data to Improve the Videos on Your Landing Pages

If you find visitors barely watch your videos (think viewing less than 30% of the content), you now have data to push your team to modify the length of the videos, for example, or get to your key message differently (perhaps you have a really long intro?).

If the data shows users aren't watching your videos at all, you may want to replace the video on your landing page with other, more customized options, or even with text that sums up the value props presented. Finally, if you identify really popular videos, it's worth checking whether they could be reused on other relevant pages, too.

Overall, you won't know whether the videos on your landing pages resonate with visitors unless you track them. Let me know in the comments below if you have any questions on the setup above – happy to jump in with answers.

Taken from: 

Are People Watching Your Landing Page Videos? Here’s How to use Google Tag Manager to Check

Hit The Ground Running With Vue.js And Firestore




Hit The Ground Running With Vue.js And Firestore

Lukas van Driel



Google Firebase has a new data storage option called ‘Firestore’ (currently in beta), which builds on the success of the Firebase Realtime Database but adds some nifty features. In this article, we'll set up the basics of a web app using Vue.js and Firestore.

Let’s say you have this great idea for a new product (e.g. the next Twitter, Facebook, or Instagram, because we can never have too much social, right?). To start off, you want to make a prototype or Minimum Viable Product (MVP) of this product. The goal is to build the core of the app as fast as possible so you can show it to users and get feedback and analyze usage. The emphasis is heavily on development speed and quick iterating.

But before we start building, our amazing product needs a name. Let’s call it “Amazeballs.” It’s going to be legen — wait for it — dary!

Here’s a shot of how I envision it:


Screenshot of Amazeballs app


The legendary Amazeballs app

Our Amazeballs app is — of course — all about sharing cheesy tidbits of your personal life with friends, in so-called Balls. At the top is a form for posting Balls, below that are your friends’ Balls.

When building an MVP, you'll need tooling that gives you the power to quickly implement the key features as well as the flexibility to quickly add and change features later on. My choice falls on Vue.js, a JavaScript rendering framework, backed by the Firebase suite (by Google) and its new real-time database, Firestore.

Firestore can be accessed directly using normal HTTP methods, which makes it a full backend-as-a-service solution: you don't have to manage any of your own servers, but you still store your data online.
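To give you an idea of how thin that layer is, here is a hypothetical raw REST call against Firestore's HTTP API, with no SDK involved. It assumes the amazeballs-by-q42 project ID used later in this article and security rules that allow public reads; for the rest of the article we'll use the JavaScript SDK instead:

// Hypothetical raw REST read of the 'balls' collection (no SDK).
// Assumes the security rules allow unauthenticated reads.
fetch('https://firestore.googleapis.com/v1/projects/amazeballs-by-q42/databases/(default)/documents/balls')
  .then((response) => response.json())
  .then((json) => console.log(json.documents));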

Sounds powerful and daunting, but no sweat, I'll guide you through the steps of creating and hosting this new web app. Notice how big the scrollbar is on this page; there aren't that many steps to it. Also, if you want to know where to put each of the code snippets in a code repository, you can see a fully running version of Amazeballs on GitHub.

Let’s Start

We’re starting out with Vue.js. It’s great for Javascript beginners, as you start out with HTML and gradually add logic to it. But don’t underestimate; it packs a lot of powerful features. This combination makes it my first choice for a front-end framework.

Vue.js has a command-line interface (CLI) for scaffolding projects. We’ll use that to get the bare-bones set-up quickly. First, install the CLI, then use it to create a new project based on the “webpack-simple” template.

npm install -g vue-cli

vue init webpack-simple amazeballs

If you follow the steps on the screen (npm install and npm run dev), a browser will open with a big Vue.js logo.

Congrats! That was easy.

Next up, we need to create a Firebase project. Head on over to https://console.firebase.google.com/ and create a project. A project starts out in the free Spark plan, which gives you a limited database (1 GB data, 50K reads per day) and 1 GB of hosting. This is more than enough for our MVP, and easily upgradable when the app gains traction.

Click on ‘Add Firebase to your web app’ to display the config that you need. We'll use this config in our application, but in a nice Vue.js manner using shared state.

First npm install firebase, then create a file called src/store.js. This is the spot that we’re going to put the shared state in so that each Vue.js component can access it independently of the component tree. Below is the content of the file. The state only contains some placeholders for now.

import Vue from 'vue';
import firebase from 'firebase/app';
import 'firebase/firestore';

// Initialize Firebase, copy this from the cloud console
// Or use mine :)
var config = {
  apiKey: "AIzaSyDlRxHKYbuCOW25uCEN2mnAAgnholag8tU",
  authDomain: "amazeballs-by-q42.firebaseapp.com",
  databaseURL: "https://amazeballs-by-q42.firebaseio.com",
  projectId: "amazeballs-by-q42",
  storageBucket: "amazeballs-by-q42.appspot.com",
  messagingSenderId: "972553621573"
};
firebase.initializeApp(config);

// The shared state object that any vue component can get access to. 
// Has some placeholders that we’ll use further on!
export const store = {
  ballsInFeed: null,
  currentUser: null,
  writeBall: (message) => console.log(message)
};

Now we'll add the Firebase parts. One piece of code to get the data from Firestore:

// a reference to the Balls collection
const ballsCollection = firebase.firestore()
  .collection('balls');

// onSnapshot is executed every time the data
// in the underlying firestore collection changes
// It will get passed an array of references to 
// the documents that match your query
ballsCollection
  .onSnapshot((ballsRef) => {
    const balls = [];
    ballsRef.forEach((doc) => {
      const ball = doc.data();
      ball.id = doc.id;
      balls.push(ball);
    });
    store.ballsInFeed = balls;
  });

And then replace the writeBall function with one that actually executes a write:

writeBall: (message) => ballsCollection.add({
  createdOn: new Date(),
  author: store.currentUser.uid, // the security rules below compare this field to request.auth.uid
  message
})

Notice how the two are completely decoupled. When you insert into a collection, the onSnapshot is triggered because you’ve inserted an item. This makes state management a lot easier.

Now you have a shared state object that any Vue.js component can easily get access to. Let’s put it to good use.
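Before we start posting, here's a minimal sketch of what reading from that store can look like in a feed component. It assumes the store.js above; the component markup is just a placeholder:

<template>
  <ul>
    <li v-for="ball in store.ballsInFeed" :key="ball.id">
      {{ ball.message }}
    </li>
  </ul>
</template>

<script>
import { store } from './store';

export default {
  data() {
    // Exposing the shared store via data() makes it reactive inside this component
    return { store };
  },
}
</script>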

Post Stuff!

First, let’s find out who the current user is.

Firebase has authentication APIs that help you with the grunt work of getting to know your user. Enable the appropriate ones in the Firebase Console under Authentication → Sign-In Method. For now, I'm going to use Google Login, with a very non-fancy button.


Screenshot of login page with Google authentication


Authentication with Google Login

Firebase doesn’t give you any interface help, so you’ll have to create your own “Login with Google/Facebook/Twitter” buttons, and/or username/password input fields. Your login component will probably look a bit like this:

<template>
  <div>
    <button @click.prevent="signInWithGoogle">Log in with Google</button>
  </div>
</template>

<script>
import firebase from 'firebase/app';
import 'firebase/auth';

export default {
  methods: {
    signInWithGoogle() {
      var provider = new firebase.auth.GoogleAuthProvider();
      firebase.auth().signInWithPopup(provider);
    }
  }
}
</script>

Now there’s one more piece of the login puzzle, and that’s getting the currentUser variable in the store. Add these lines to your store.js:

// When a user logs in or out, save that in the store
firebase.auth().onAuthStateChanged((user) => {
  store.currentUser = user;
});

Due to these three lines, every time the currently-logged-in user changes (logs in or out), store.currentUser also changes. Let’s post some Balls!


Screenshot of logout option


Login state is stored in the store.js file
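If you want to surface that login state in your UI, any component can read store.currentUser in the same way. Here's a quick sketch; the markup is a placeholder, and logging out uses the standard firebase.auth().signOut() call:

<template>
  <div>
    <span v-if="store.currentUser">
      Logged in as {{ store.currentUser.displayName }}
      <button @click.prevent="signOut">Log out</button>
    </span>
    <span v-else>Not logged in</span>
  </div>
</template>

<script>
import firebase from 'firebase/app';
import 'firebase/auth';
import { store } from './store';

export default {
  data() {
    return { store };
  },
  methods: {
    signOut() {
      firebase.auth().signOut();
    },
  },
}
</script>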

The input form is a separate Vue.js component that is hooked up to the writeBall function in our store, like this:

<template>
  <form @submit.prevent="formPost">
    <textarea v-model="message" />
    <input type="submit" value="DUNK!" />
  </form>
</template>

<script>
import { store } from './store';

export default {
  data() {
    return {
      message: null,
    };
  },
  methods: {
    formPost() {
      store.writeBall(this.message);
    }
  },
}
</script>

Awesome! Now people can log in and start posting Balls. But wait, we're missing authorization. We only want you to be able to post Balls as yourself, and that's where Firestore Rules come in. They're made up of JavaScript-ish code that defines access privileges to the database. You can enter them via the Firestore console, but you can also use the Firebase CLI to install them from a file on disk. Install and run it like this:

npm install -g firebase-tools
firebase login
firebase init firestore

You'll get a file named firestore.rules where you can add authorization for your app. We want every user to be able to insert their own Balls, but not to insert or edit anyone else's. The example below does nicely: it allows everyone to read all documents in the database, but you can only create a document if you're logged in and the inserted resource has an “author” field that matches the uid of the currently logged-in user.

service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read: if true;
      allow create: if request.auth.uid != null && request.auth.uid == request.resource.data.author;
    }
  }
}

It looks like just a few lines of code, but it’s very powerful and can get complex very quickly. Firebase is working on better tooling around this part, but for now, it’s trial-and-error until it behaves the way you want.

If you run firebase deploy, the Firestore rules will be deployed and will be securing your production data in seconds.

Adding Server Logic

On your homepage, you want to see a timeline with your friends’ Balls. Depending on how you want to determine which Balls a user sees, performing this query directly on the database could be a performance bottleneck. An alternative is to create a Firebase Cloud Function that activates on every posted Ball and appends it to the walls of all the author’s friends. This way it’s asynchronous, non-blocking and eventually consistent. Or in other words, it’ll get there.

To keep the examples simple, I’ll do a small demo of listening to created Balls and modifying their message. Not because this is particularly useful, but to show you how easy it is to get cloud functions up-and-running.

const functions = require('firebase-functions');

exports.createBall = functions.firestore
  .document('balls/{ballId}')
  .onCreate(event => {
    var createdMessage = event.data.get('message');
    return event.data.ref.set({
      message: createdMessage + ', yo!'
    }, { merge: true });
  });

Oh, wait, I forgot to tell you where to write this code.

firebase init functions

This creates the functions directory with an index.js. That’s the file you can write your own Cloud Functions in. Or copy-paste mine if you’re very impressed by it.
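When the function is ready, you can deploy it on its own, without touching hosting or rules:

firebase deploy --only functions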

Cloud Functions give you a nice spot to decouple different parts of your application and have them asynchronously communicate. Or, in architectural drawing style:


Server logic architectural diagram of Cloud Functions


Asynchronous communication between the different components of your application

Last Step: Deployment

Firebase has its Hosting option available for this, and you can use it via the Firebase CLI.

firebase init hosting

Choose dist as the public directory, and then ‘Yes’ to rewrite all URLs to index.html. This last option allows you to use vue-router to manage pretty URLs within your app.
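Those answers end up in a firebase.json file at the root of your project, which should look roughly like this (the generated file may contain a few extra defaults):

{
  "hosting": {
    "public": "dist",
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}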

Now there’s a small hurdle: the dist folder doesn’t contain an index.html file that points to the right build of your code. To fix this, add an npm script to your package.json:


  "scripts": 
    "deploy": "npm run build && mkdir dist/dist && mv dist/*.* dist/dist/ && cp index.html dist/ && firebase deploy"
  
}

Now just run npm run deploy, and the Firebase CLI will show you the URL of your hosted code!

When To Use This Architecture

This setup is perfect for an MVP. By the third time you’ve done this, you’ll have a working web app in minutes — backed by a scalable database that is hosted for free. You can immediately start building features.

Also, there's a lot of space to grow. If Cloud Functions aren't powerful enough, you can fall back to a traditional API running on Docker in Google Cloud, for instance. You can also upgrade your Vue.js architecture with vue-router and vuex, and use the power of webpack that's included in the vue-cli template.

It's not all rainbows and unicorns, though. The most notorious caveat is the fact that your clients are immediately talking to your database. There's no middleware layer that you can use to transform the raw data into a format that's easier for the client, so you have to store it in a client-friendly way. And whenever your clients' needs change, you're going to find it pretty difficult to run data migrations on Firebase. For that, you'll need to write a custom Firestore client that reads every record, transforms it, and writes it back.
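To give you an idea, such a migration client could be a small Node.js script built on the Firebase Admin SDK. This is only a sketch; the service-account.json path and the added likes field are placeholders I made up for illustration:

// Hypothetical one-off migration script, run locally with Node.js.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.cert(require('./service-account.json')),
});

admin.firestore().collection('balls').get()
  .then((snapshot) => {
    // Transform every document and write it back
    const updates = [];
    snapshot.forEach((doc) => {
      updates.push(doc.ref.update({ likes: 0 })); // example transformation
    });
    return Promise.all(updates);
  })
  .then(() => console.log('Migration finished'));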

Take time to decide on your data model. If you need to change your data model later on, data migration is your only option.

So what are examples of projects using these tools? Amongst the big names that use Vue.js are Laravel, GitLab and (for the Dutch) nu.nl. Firestore is still in beta, so not a lot of active users there yet, but the Firebase suite is already being used by National Public Radio, Shazam, and others. I’ve seen colleagues implement Firebase for the Unity-based game Road Warriors that was downloaded over a million times in the first five days. It can take quite some load, and it’s very versatile with clients for web, native mobile, Unity, and so on.

Where Do I Sign Up?!

If you want to learn more, consider the following resources:

Happy coding!

Smashing Editorial
(da, ra, hj, il)


Link: 

Hit The Ground Running With Vue.js And Firestore

Why Is Google Analytics Inaccurate?

You may have noticed that some of your Google Analytics data isn’t entirely accurate. Whether you saw a sudden, unwarranted change in user behavior, picked up on major differences after a redesign, or found some unexplainable information within a report, there are many things that can indicate issues with your data. And that’s completely normal. Google Analytics is one of the most popular (if not the most popular) platforms for monitoring site performance. It can provide tons of valuable insight and is considered by many SEOs and site owners to be an indispensable tool. But it isn’t perfect. In fact,…

The post Why Is Google Analytics Inaccurate? appeared first on The Daily Egg.

Visit link – 

Why Is Google Analytics Inaccurate?