Breakthrough

A decade or so ago, a young musician couldn’t get anyone to play his music. He had raw talent, and just recorded his first album, but all the gatekeepers thought he sounded too young. Without Disney or Nickelodeon marketing his stuff, he was a dud.

What does he do?

I bet you know the names of a few famous impressionist painters. Monet. Manet. Degas. What makes them famous though? Are they really the best? Do you know a bad impressionist painter?

What about Gustave Caillebotte?

Caillebotte was an interesting impressionist. I don’t think anyone would say he’s bad, but he sure isn’t as popular as Monet.

Caillebotte also has a quirky story. Upon his death he requested his art collection be hung in the Musée du Luxembourg in Paris. His art collection was about 70 paintings he had collected from his friends, also impressionists.

They weren’t popular. They were actually the worst paintings of his friends. “Worst” being the ones his friends couldn’t get anyone else to buy. And at the time, people didn’t even like impressionism. Many hated it.

So Caillebotte’s request in his will for the government to take his friends’ paintings and hang them in a museum was insane. How can someone force a museum to hang a bunch of paintings that no one liked or was even familiar with, just because it’s a dead person’s request? It resulted in fierce criticism from the art world and public scrutiny.

But Renoir finally convinced the museum to hang half of the collection 3 years after Caillebotte’s death. When the collection opened to the public, the museum was packed. Everyone wanted to see these paintings because they had generated so much scandal.

Today, impressionism is mostly known for the work of the 7 greatest impressionist painters: Manet, Monet, Cézanne, Degas, Renoir, Pissarro, and Sisley.

The 7 friends in Caillebotte’s collection.

Sure Caillebotte had an eye for talent, and a belief impressionism would be admired at some point in the future.

But what really happened is that the inadvertent exposure that Caillebotte brought to his friends also made people like them more.

At least that’s the argument Derek Thompson makes in his book Hit Makers. Derek mentions James Cutting, a professor of psychology at Cornell University, and Cutting’s work to show how exposure begets likability.

In Cutting’s experiments he had people compare famous paintings to more obscure works. Cutting proved the obvious — people preferred paintings by famous painters 6 out of 10 times.

But when Cutting came up with an experiment to expose people to those obscure paintings 4 times more frequently than the famous paintings, people’s preferences switched. Now people preferred the more obscure paintings 8 out of 10 times.

We don’t judge things just based on quality. Exposure changes our mind. The more we see, hear, or read something, the more we like it.

That young musician had promise. But he needed to break through somehow. His manager came up with a plan: they were going to get in a van and travel around the country visiting every radio station they could. The kid was charming and had some talent, so it wasn’t too hard to schedule visits to play an acoustic track from his record live on air.

And this kid performed that track a lot. Eventually the exposure of playing the same song over and over again propelled “One Time” to the top of the charts and this musician is now a household name. This musician’s manager said:

There’s not a DJ that can say they haven’t met Justin Bieber.

There’s a lot to unpack from Justin’s rise to the sensation he is today. Not the least of which was the grit of a 14-year-old kid who wouldn’t take no for an answer. Or the unwavering optimism of putting himself out there on YouTube, uploading crappy videos of himself performing.

But one of the most interesting aspects of Justin’s story is that to get through his obstacle, he went out and generated exposure to his work even if it wasn’t the exposure that he originally intended. He thought he could cut a record and get a ton of people listening to it. Instead he had to take the little wins and build from there.

Most of us aren’t going to be the next Justin Bieber, but it’s still a lesson for us: go figure out how to get more exposure even if it isn’t the big splash we imagine we’re capable of.

Want to be a headline speaker? Go do talks at all the tiny chambers of commerce in front of 8 people for a while. Want to get a byline in a famous publication? Do hundreds of guest blog posts for whoever will pick you up.

It’s a big reason I’ve generated the audience I have. I’m out there doing podcasts, daily vlog episodes, interviews, and writing articles in a ton of different places.

Sometimes the opportunity is small. I’ll be the person’s first interview they’ve ever done. Doesn’t matter. Sometimes the message feels repetitive. I’ll be asked about the same question I’ve answered a million times. Doesn’t matter.

I remind myself how often someone like a Justin Bieber played to just a handful of people at first or played the same single song over and over again without losing faith or enthusiasm. Or how Monet, no matter how talented he was, still needed the exposure, even if accidentally, a friend generated.

Because in this day and age, whether you have a good product or you’re a talented musician or painter, we all need to be out there generating as much exposure as possible to break through the noise.

P.S. You should follow me on YouTube, where I share more about how we run our business, do product design, market ourselves, and just get through life.

And if you need a zero-learning-curve system to track leads and manage follow-ups you should try Highrise.

Breakthrough was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.

2017-09-23T12:16:10+00:00 Nathan Kontny Signal vs. Noise

Casper went to war with a popular mattress reviews site — then financed its takeover

Sleepy in the sheets, slick in the streets.

In April 2016, the mattress startup Casper sued three popular mattress review sites, claiming they drove business to Casper competitors without proper disclosure that these mattress brands paid sales commissions to the sites.

The schemes, in Casper’s view, amounted to false advertising and deceptive practices because the sites promoted their reviews as unbiased but did not conspicuously disclose the relationships with the specific mattress makers they were recommending.

In the booming world of online mattress sales, these reviews sites had accumulated massive power — often turning up high in Google search results for general queries like “mattress reviews” or brand-specific ones like “Casper reviews.” And without many showrooms to allow customers to try out these mattresses, online reviews carried even more weight.

As a result of that power, Casper at times engaged with the sites. In 2015, Casper CEO Philip Krim had a months-long email conversation with one of the sites’ founders, sounding like he was ready for Casper to play ball.

“Currently you actively endorse a competing product on our review page,” Krim wrote in one email, which was made public in court filings. “What can we do to not have you endorse another product as superior to ours?”

In an email to Recode last year, Krim characterized the conversation as an “effort to figure out how to urge these sites to stop steering away consumers specifically looking for Casper to the copycats,” whom Krim alleged “were paying larger affiliate fees and provided more lucrative compensation structures.”

Casper claimed the practices cost it millions of dollars in potential sales.

In the end, all three reviews companies settled with Casper. But a bizarre thing happened after the settlement with a popular site called Sleepopolis: Casper provided a loan to another mattress reviews company to acquire the site from its previous owners.

A Casper spokesperson says Sleepopolis is run independently of Casper — meaning both as a business and as an editorial entity. It is now owned by a company called JAKK Media, which specializes in search engine optimization and operates other reviews sites. A disclosure appears on many of Sleepopolis’ pages.

But the relationship has not been lost on Casper’s competitors or competing review sites, who have been gossiping about it since it was announced earlier this summer.

They wonder what happens if the operators of Sleepopolis default on the Casper loan, giving the mattress company control of the site. Perhaps more importantly, they question why a company of Casper’s stature in the industry — the startup is believed to be the biggest of the so-called “bed-in-a-box” startups, recently raising a $170 million investment led by Target and with its products in Target stores — would risk the perception of impropriety. I have yet to get an answer.

A review on Sleepopolis from 2016 called Casper “an above average ... mattress, but it’s not above average enough. There are simply too many other mattresses available that I find offer better support, comfort, and feel for about the same price (some even less).”

The review linked to four other mattress brands that the previous Sleepopolis owner recommended at the time over Casper. His site appeared to have had commission relationships with at least three of them at the time, but not Casper.

That review appears to be gone. In its place, there’s a new detailed Casper review on Sleepopolis that is marked as being updated this September. A link to it appears on the first page of Google search results for “Casper reviews.”

The review ends on this note:

“Overall my experience with Casper was very positive – the comfort of the mattress definitely stands out from the pack in my mind. With their generous sleep-trial, if the Casper mattress intrigues you, I say go for it!”

The writer then provides a link to Casper’s website — along with a discount code.

2017-09-23T20:39:47+00:00 Jason Del Rey Recode - Front Page

Highrise about town

Recent places Highrise has been spotted in the wild

Photo by Christine Roy on Unsplash

Conference organizing

Congratulations to the Girls to the Moon team for another successful Campference providing a safe space for girls to keep kicking ass! Alison directs operations for the group and we know how tough it is to keep those pieces together. She uses Highrise to help.

Girls To The Moon Sponsors underwrite our programming

Job interviews are ineffective

Since starting the Highrise team from scratch when we spun off from Basecamp in 2014, we’ve learned a thing or two about hiring. A big one being how terrible interviews are for finding successful fits.

What we do is find a few top candidates and we pay them for a one week mini project and see what they come back with. It’s not cheap, but it’s worse to hire someone who doesn’t work out.

Many Hirers Turn to Alternatives to Job Interviews

Too many marketing options

Overwhelmed by all the options to market yourself? Here are 8 tips on dealing with it. Number 5? Use Highrise :)

Many of my clients swear by Highrise to manage their contacts and follow-ups.

8 Tips to Get Over Marketing Overwhelm

Starting your own consulting business

If you want to start your own consulting business, a ton of great advice here including using Highrise to help with the organization:

Highrise adds structure and organization so teams can focus on creating, running, and growing their business rather than trying to understand who said what when and to whom and letting business fall through the cracks.

The Ultimate Guide: How to start your own consulting business overseas | Nomad Capitalist

Being original

A recent vlog episode of mine reminding people how important it is to not get stuck trying “to be original”. You can follow me on YouTube here:

Looking for a CRM?

And if you are in the market for a CRM, whether it’s Highrise or something else, here are some things to keep in mind during your search.

7 Dead-Simple Tips for Effective CRM

I hope you enjoy the things we’ve been sharing. And I’m thrilled Highrise is finding a place in so many lives and businesses. If there’s anything you’d be interested in us covering, or if you’d like to interview any of us, we’d love to chat. Don’t hesitate to reach out.

Highrise about town was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.

2017-09-22T16:19:04+00:00 Nathan Kontny Signal vs. Noise

Franchise: a sql notebook

2017-09-22T15:02:44+00:00 DataTau

10 Things that DO NOT (Directly) Affect Your Google Rankings - Whiteboard Friday

Posted by randfish

What do the age of your site, your headline H1/H2 preference, bounce rate, and shared hosting all have in common? You might've gotten a hint from the title: not a single one of them directly affects your Google rankings. In this rather comforting Whiteboard Friday, Rand lists out ten factors commonly thought to influence your rankings that Google simply doesn't care about.

10 Things that do not affect your Google rankings


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about things that do not affect your Google rankings.

So it turns out lots of people have this idea that anything and everything that you do with your website or on the web could have an impact. Well, some things have an indirect impact and maybe even a few of these do. I'll talk through those. But tons and tons of things that you do don't directly affect your Google rankings. So I'll try and walk through some of these that I've heard or seen questions about, especially in the recent past.

1. The age of your website.

First one, longstanding debate: the age of your website. Does Google care if you registered your site in 1998 or 2008 or 2016? No, they don’t care at all. They only care about the degree to which your content actually helps people and whether you have links and authority signals and those kinds of things. Granted, it is true there’s correlation going in this direction. If you started a site in 1998 and it’s still going strong today, chances are good that you’ve built up lots of links, authority, and equity, all the kinds of signals that Google does care about.

But maybe you've just had a very successful first two years, and you only registered your site in 2015, and you've built up all those same signals. Google is actually probably going to reward that site even more, because it's built up the same authority and influence in a very small period of time versus a much longer one.

2. Whether you do or don't use Google apps and services.

So people worry that, "Oh, wait a minute. Can't Google sort of monitor what's going on with my Google Analytics account and see all my data there and AdSense? What if they can look inside Gmail or Google Docs?"

First off, the engineers who work on these products and the engineers who work on search — most of them would quit that very day if they discovered that Google was peering into your Gmail account to find that you had been buying shady links or that you weren’t really as authoritative as you looked on the web. So don’t fear that using these services, or deciding not to, will hurt or harm your rankings in Google web search in any way. It won’t.

3. Likes, shares, plus-ones, tweet counts of your web pages.

So you have a Facebook counter on there, and it shows that you have 17,000 shares on that page. Wow, that's a lot of shares. Does Google care? No, they don't care at all. In fact, they're not even looking at that or using it. But what if it turns out that many of those people who shared it on Facebook also did other activities that resulted in lots of browser activity and search activity, click-through activity, increased branding, lower pogo-sticking rates, brand preference for you in the search results, and links? Well, Google does care about a lot of those things. So indirectly, this can have an impact. Directly, no. Should you buy 10,000 Facebook shares? No, you should not.

4. What about raw bounce rate or time on site?

Well, this is sort of an interesting one. Let’s say you have a time on site of two minutes, and you look at your industry benchmarks, maybe via Google Analytics if you’ve opted into sharing there, and you see that your numbers are actually below average. Is that going to hurt you in Google web search? Not necessarily. It could be the case that those visitors are coming from elsewhere. It could be the case that you are actually serving up a faster-loading site and you’re getting people to the information that they need more quickly, and so their time on site is slightly lower or maybe even their bounce rate is higher.

But so long as pogo-sticking type of activity, people bouncing back to the search results and choosing a different result because you didn't actually answer their query, so long as that remains fine, you're not in trouble here. So raw bounce rate, raw time on site, I wouldn't worry too much about that.

5. The tech under your site's hood.

Are you using certain JavaScript libraries like React or Angular? One is from Facebook, one is from Google. If you use Facebook’s, does Google give you a hard time about it? No. Facebook might, due to patent issues, but anyway we won’t worry about that. What about .NET, or what if you’re still coding things up in raw HTML? Just fine. It doesn’t matter. If Google can crawl each of these URLs and see the unique content on there, and the content Google sees and the content visitors see is the same, they don’t care what’s being used under the hood to deliver that to the browser.

6. Having or not having a knowledge panel on the right-hand side of the search results.

Sometimes you get that knowledge panel, and it shows information from around the web, sometimes from Wikipedia. What about site links, where you search for your brand name and you get branded site links? The first few sets of results are all from your own website, and they’re sort of indented. Does that impact your rankings? No, it does not. It doesn’t impact your rankings for any other search query anyway.

It could be, and probably is, the case that showing up here means you’re going to get a higher share of those clicks, and it’s a good thing. But does this impact your rankings for some other totally unbranded query to your site? No, it doesn’t at all. I wouldn’t stress too much. Over time, sites tend to build up site links and knowledge panels as their brands become bigger, better known, and more widely covered around the web, online and offline. So this is not something to stress about.

7. What about using shared hosting or some of the inexpensive hosting options out there?

Well, directly, this is not going to affect you unless it hurts load speed or up time. If it doesn't hurt either of those things and they're just as good as they were before or as they would be if you were paying more or using solo hosting, you're just fine. Don't worry about it.

8. Use of defaults that Google already assumes.

So when Google crawls a site, when they come to a site, if you don't have a robots.txt file, or you have a robots.txt file but it doesn't include any exclusions, any disallows, or they reach a page and it has no meta robots tag, they're just going to assume that they get to crawl everything and that they should follow all the links.

Using things like the meta robots "index, follow" or using, on an individual link, a rel=follow inside the href tag, or in your robots.txt file specifying that Google can crawl everything, doesn't boost anything. They just assume all those things by default. Using them in these places, saying yes, you can do the default thing, doesn't give you any special benefit. It doesn't hurt you, but it gives you no benefit. Google just doesn't care.
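Spelled out, the redundant defaults Rand describes look something like this (a sketch; both snippets are equivalent to omitting the directives entirely):

```
# robots.txt with no disallows -- Google already assumes it may crawl everything
User-agent: *
Disallow:
```

```html
<!-- A meta robots tag stating what Google assumes by default anyway -->
<meta name="robots" content="index, follow">
```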

9. Characters that you use as separators in your title element.

So the page title element sits in the header of a document, and it could be something like your brand name and then a separator and some words and phrases after it, or the other way around, words and phrases, separator, the brand name. Does it matter if that separator is the pipe bar or a hyphen or a colon or any other special character that you would like to use? No, Google does not care. You don't need to worry about it. This is a personal preference issue.

Now, maybe you've found that one of these characters has a slightly better click-through rate and preference than another one. If you've found that, great. We have not seen one broadly on the web. Some people will say they particularly like the pipe over the hyphen. I don't think it matters too much. I think it's up to you.

10. What about using headlines and the H1, H2, H3 tags?

Well, I've heard this said: If you put your headline inside an H2 rather than an H1, Google will consider it a little less important. No, that is definitely not true. In fact, I'm not even sure the degree to which Google cares at all whether you use H1s or H2s or H3s, or whether they just look at the content and they say, "Well, this one is big and at the top and bold. That must be the headline, and that's how we're going to treat it. This one is lower down and smaller. We're going to say that's probably a sub-header."

Whether you use an H5 or an H2 or an H3, that is your CSS on your site and up to you and your designers. It is still best practice in HTML to make sure the headline, the biggest one, is the H1. I would do that for design purposes and for having nice clean HTML and CSS, but I wouldn't stress about it from Google's perspective. If your designers tell you, "Hey, we can't get that headline in an H1. We've got to use the H2 because of how our style sheets are formatted," fine. No big deal. Don't stress.

Normally on Whiteboard Friday, we would end right here. But today, I'd like to ask. These 10 are only the tip of the iceberg. So if you have others that you've seen people say, "Oh, wait a minute, is this a Google ranking factor?" and you think to yourself, "Ah, jeez, no, that's not a ranking factor," go ahead and leave them in the comments. We'd love to see them there and chat through and list all the different non-Google ranking factors.

Thanks, everyone. See you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

2017-09-22T00:05:00+00:00 randfish The Moz Blog

Thermal structure of Hurricane Maria

Hurricane Maria made landfall in Puerto Rico. This visualization by Joshua Stevens at NASA shows what the thermal structure of the storm looked like, based on data collected by the Terra satellite.

Colder clouds, which are generally higher in the atmosphere, are shown with white. Somewhat warmer, lower clouds appear purple. The image reveals a very well-defined eye surrounded by high clouds on all sides—an indication that the storm was very intense.



2017-09-22T03:35:58+00:00 Nathan Yau FlowingData

Your fast car

Right there, in your driveway, is a really fast car. And here are the keys. Now, go drive it.

Right there, in your hand, is a Chicago Pneumatics 0651 hammer. You can drive a nail through just about anything with it, again and again if you choose. Time to use it.

And here's a keyboard, connected to the entire world. Here's a publishing platform you can use to interact with just about anyone, just about any time, for free. You wanted a level playing field, one where you have just as good a shot as anyone else? Here it is. Do the work.

That's what we're all counting on.

For you to do the work. 

2017-09-22T08:08:00+00:00 Seth Godin Seth's Blog

Is Uber's lawsuit against an agency a harbinger of greater brand-agency discord?

Thanks to all of the scandals and controversies that have hit online advertising in the past year, a growing number of companies are taking a closer look at their digital ad campaigns.

Some of them aren't liking what they're finding.


2017-09-22T08:39:51+00:00 Patricio Robles Posts from the Econsultancy blog

How to Use Regular Expressions in Google Analytics like a Pro

Regular expressions are widely used across different fields to solve complex questions in data analysis. If you use Google Analytics, you know how much data it can give you, and you need something to help you deal with it. Regular expressions, or RegEx, are a good match here. In this article, I’ll explain how to use them to create pro reports and filters.

Some theory

Before creating and using regular expressions, you should speak their language. So here are the most important “words” that will help you bridge that gap.

. – this symbol matches any single character. Two such symbols therefore match any two characters, and so on.

hou.e – house, hou7e, hou)e
ho..e – house, ho5re, ho%be

* – matches 0 or more of the preceding character.


magent* -  magentn, magent, magenta, magentm

| – the pipe is used to separate parts of the regular expression from each other (it works as OR)

hou.e|magent* – house, magento

^ – requires the match to be at the beginning of the field

^color – matches color21, colorkjir, color-23-67, etc. but does NOT match anything like 9color, ycolor, icolor, etc.

$ – anchors the match to the end of the field

htm$ - matches any-url.htm but NOT any-url.html

() – contains variations of matching items. It is usually combined with the pipe.

dear (Mr|Mrs|Ms|Miss) Smith – matches dear Mr Smith, dear Mrs Smith, dear Ms Smith, dear Miss Smith.

Parentheses are also often combined with the asterisk * and dot . to create the part (.*), which is treated as “absolutely everything”:

/map/country/(.*) - /map/country/usa, /map/country/usa15, /map/country/belarus/, /map/country/uk/

\ – transforms any special RegEx character into a simple symbol

my-url-(promo|campaign|ref)\.html – matches my-url-promo.html, my-url-campaign.html, my-url-ref.html
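These building blocks can be sanity-checked outside Google Analytics. Here is a minimal Python sketch, using re.search to approximate GA's partial (unanchored) matching; all the sample strings come from the examples above:

```python
import re

def ga_match(pattern, value):
    """GA filters match anywhere in the field, which re.search approximates."""
    return re.search(pattern, value) is not None

assert ga_match(r"hou.e", "house")                    # . matches any single character
assert ga_match(r"hou.e|magent*", "magento")          # | separates alternatives
assert ga_match(r"^color", "color-23-67")             # ^ anchors to the start
assert not ga_match(r"^color", "9color")
assert ga_match(r"htm$", "any-url.htm")               # $ anchors to the end
assert not ga_match(r"htm$", "any-url.html")
assert ga_match(r"dear (Mr|Mrs|Ms|Miss) Smith", "dear Mrs Smith")  # () groups variations
assert ga_match(r"my-url-(promo|ref)\.html", "my-url-ref.html")    # \ escapes the dot
print("all patterns behave as described")
```

Note that Google Analytics uses RE2 syntax, which differs from Python's re in a few corners (no backreferences, for example), but these basic tokens behave the same in both.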

Now that the theory is covered, let’s play with Google Analytics filters.


RegEx for Google Analytics filters

The main reason to use regular expressions in Google Analytics is to filter out the data you need to explore. For example, say you have 5,000+ URLs and need stats for just 250 of them. If you try to get the stats for these URLs one by one, you’ll get bored quickly and spend lots (I really mean LOTS) of time. Instead, you can use a regular expression to get the stats in a few clicks.

When it comes to Google Analytics filters, you can create them in the existing reports:

use regex in reporting

or in custom reports that are created by you:

regex custom reports

The logic behind building a regular expression is to make a list of the needed URLs and find common parts in them. Here are some examples.

URLs in one category

Example 1

You need all URLs in a single category:

But not these:

In these examples you see that /magento-extensions/ is the common part, so you should use it to create your RegEx:


The ^ excludes any URLs that contain anything other than the needed category immediately after the host.

Example 2

You need particular URLs within one category:

But not these:

You can use the following regular expression:
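The original URL lists and the expression itself did not survive formatting, so here is a hypothetical reconstruction of the idea in Python; the URLs and page names are invented for the sketch:

```python
import re

# Invented URLs standing in for the article's lost examples.
urls = [
    "/magento-extensions/seo-suite.html",
    "/magento-extensions/blog-extension.html",
    "/magento-extensions/other-tool.html",   # in the category, but not wanted
    "/wordpress-plugins/seo-suite.html",     # wrong category
]

# Particular pages within one category: anchor the category with ^,
# list the wanted pages inside (), and escape the dot with \.
wanted = re.compile(r"^/magento-extensions/(seo-suite|blog-extension)\.html$")
matches = [u for u in urls if wanted.match(u)]
print(matches)
# → ['/magento-extensions/seo-suite.html', '/magento-extensions/blog-extension.html']
```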


URLs from different categories

Example 1

Here are some other examples. If you need to include these URLs:

Here is what you should use:


Example 2

To get info on these URLs:

you can use this:



Excluding parameters from Google Analytics reports

You will find numerous URLs with parameters like ?, =, etc. They can be generated automatically by your store navigation, or they can simply appear outside your control. To exclude such parameters from your reports you can use this regular expression:


Note that both inclusion and exclusion filters can be used in one report which is quite handy.
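The article's own exclusion expression is not shown above, but one common choice is to exclude any URL containing a literal question mark. A Python sketch with invented URLs:

```python
import re

# "?" is a special RegEx character, so it must be escaped as \? to mean
# a literal question mark.
exclude_params = re.compile(r"\?")

urls = [
    "/magento-extensions/",
    "/magento-extensions/?dir=asc&order=price",
    "/checkout/?ref=promo",
]
kept = [u for u in urls if not exclude_params.search(u)]
print(kept)
# → ['/magento-extensions/']
```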


IP addresses exclusion

When a team works on a site, each member opens its pages many times a day, which results in inaccurate data in Google Analytics. You need to see the actions of real users and customers, not team members. You can exclude internal traffic with an IP exclusion filter at the view level.

Make a list of all the internal IPs and create a regular expression from them. For example:


Here is what you can use:


Ranges can also be used here, but I prefer a simpler, more understandable structure. It’s also easy to add or delete any IP address from the list if you need to.
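As a sketch of assembling such an expression programmatically (the IP addresses are invented), Python's re.escape handles the dot-escaping:

```python
import re

# Hypothetical internal IPs -- substitute your own list.
internal_ips = ["203.0.113.10", "203.0.113.11", "198.51.100.7"]

# re.escape turns each "." into "\." so it matches a literal dot,
# and "|" joins the addresses into one alternation.
pattern = "^(" + "|".join(re.escape(ip) for ip in internal_ips) + ")$"
print(pattern)
# → ^(203\.0\.113\.10|203\.0\.113\.11|198\.51\.100\.7)$

assert re.match(pattern, "203.0.113.10")
assert not re.match(pattern, "203.0.113.99")
```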

You can add your IP filter in Admin > Filters > New Filter > Custom

exclude multiple ips

Important things to remember

  • You should know that view-level filters like IP exclusion (those that change the way your data is collected) cannot be undone: if you’ve mistakenly excluded too many IPs and lost Analytics data for that period, you won’t get it back by removing the exclusion filter. That’s why you should always have a test GA view for trying out new filters.

  • Search filters in reports are safe: you can use them in any Google Analytics view. You can also create and save custom reports with them.

  • Your regular expressions should not contain any spaces, as anything after a space is ignored.

  • If you need to create a really big regular expression, build it part by part. You can test each part to make sure it works and then combine them using the pipe |. This way you won’t need to review one huge regular expression if something goes wrong.

  • Testing is always the answer. This will help you in creating a particular RegEx for your site.


2017-09-22T10:53:30+00:00 Kristina Azarenko Blog - MarketLytics

London Says It Won't Reissue Uber's License

London’s top transport authority stripped Uber of its private-car hire license in the city, threatening to shut the company out of one of its biggest markets.

2017-09-23T01:09:04+00:00 Technology

Using Google Sheets as a Script Controller

Learn how to use Google Sheets to make AdWords script adjustments via spreadsheet rather than through the code itself.


2017-09-21T11:45:56+00:00 Jacob Fairclough The Adventures of PPC Hero

A Web Dev’s checklist for maintaining page speed

A guide for web developers to maintain a faster site with better performance

It’s obvious to say that all websites need upkeep, but oftentimes they are left as “good enough”: collecting dust, attracting hackers, slipping in the rankings. We’ve already gone into painstaking detail about why page speed matters, with guidance on site speed optimization at every level from novice to advanced. This article assumes you’ve implemented most of those major improvements, at least somewhat recently. If you haven’t, then definitely do start here.

In this post, we’ll look at the most likely culprits causing page speed to creep back up, and basic fixes to help keep the base layer of the marketing stack (infrastructure, page/site speed) healthy.

Infrastructure and site speed are the foundation for good digital marketing

Infrastructure drives internet marketing. It's the base of the stack.

Image compression

Over time, lots of content gets added, edited, and generally messed with on your site. Content producers upload images, page templates get a facelift, and so on. In the hustle to keep our content fresh, compelling, and published on-time, marketers can easily forget to follow the image compression procedures that we carefully laid out the last time we did a site speed push.

Whatever the reason for the average image size creeping up, if images are not optimized before they go live, or auto-optimized on upload, it won’t take long for your newest content to under-perform.

Maintenance Tip #1: Test a handful of your main pages (homepage, blog hub, services page, products hub, etc.) quarterly. Analyze the results and make sure you address image compression issues.

Identify where and how over-sized images are popping up most frequently, and proactively address it with the folks who can improve their process. Teach your content team how they can optimize imagery for your site. Better still, implement a plugin that will auto-optimize your imagery on upload.
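As a minimal sketch of that kind of automated check (the 300 KB budget and the file names are my own illustration, not from the article), a script could flag over-budget images before they go live:

```python
# Flag images that blow a size budget before they go live.
# The budget and file names here are illustrative assumptions.
MAX_BYTES = 300 * 1024

def oversized(images):
    """images: mapping of filename -> size in bytes; returns offenders, largest first."""
    offenders = [(name, size) for name, size in images.items() if size > MAX_BYTES]
    return [name for name, size in sorted(offenders, key=lambda x: -x[1])]

uploads = {"hero-banner.jpg": 1_450_000, "team-photo.png": 620_000, "icon.svg": 4_200}
print(oversized(uploads))  # ['hero-banner.jpg', 'team-photo.png']
```

A check like this can run in CI or on upload, so over-sized imagery gets caught before it drags page speed back down.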

HTACCESS rules overwritten

This may sound like a strange culprit, but it definitely happens. A CMS and/or plugin modifies the local Apache config file (.htaccess) and in the process, wipes out a bunch of browser caching or compression directives.

Maintenance Tip #2: Be sure that you’re still defining max-age (or far off) expiration times for your static files like images, CSS, JavaScript, fonts, etc.
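As a sketch of the sort of directives that tend to get wiped (the expiry lifetimes below are illustrative, not prescriptive):

```apache
# Illustrative .htaccess caching rules; tune the lifetimes for your own site
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 year"
  ExpiresByType image/png "access plus 1 year"
  ExpiresByType text/css "access plus 1 month"
  ExpiresByType application/javascript "access plus 1 month"
</IfModule>
<IfModule mod_headers.c>
  # Cache-Control max-age in seconds (here, one year) for static assets
  <FilesMatch "\.(jpe?g|png|gif|css|js|woff2?)$">
    Header set Cache-Control "max-age=31536000, public"
  </FilesMatch>
</IfModule>
```

Keeping a copy of these rules in version control makes it easy to spot when a CMS or plugin update has silently rewritten the file.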

JavaScript and CSS bloat

Have you added any plugins to extend functionality lately? More often than not, plugins inject their own JavaScript and CSS on every page, even if they’re only truly used on a few.

Maintenance Tip #3: Customize your theme (or app) to remove these unnecessary resources where they are not needed. This can be tedious, but taking the time to limit all plug-ins to only the uses and pages where they’re needed will add up to improve your performance metrics, and enhance your customers’ experience on your site.

3rd party scripts

Marketers love their tools. They provide great insights into all kinds of metrics to help make informed decisions. From page views and click-throughs to heat maps and scroll tracking. But sometimes they love their tools too much — and subsequently have their dev team install a bunch of 3rd party scripts into the code.

With the advent of Google Tag Manager, marketers can even insert some of these scripts themselves, which opens its own can of worms that we won’t get into. We’re big fans of GTM in general for the agility and visibility it brings to marketing teams, but as a developer that knows the value of site speed, you still need to understand all the code that’s going into your site.

Maintenance Tip #4: Coordinate with your analytics and/or marketing teams who utilize these tools and analyze them carefully. Be picky about what you are running on your site. Purge scripts that are no longer needed (or no longer work).

CMS and Updates to Plugins, Extensions, Modules

Regardless of your CMS, you should be keeping up with core updates for security patching and enhanced functionality. Sometimes, the updates are improvements to code efficiency. Plus, you don’t want to fall so far behind that when you finally address updates, they are of the large, complicated, headache variety. Iterative updates for the win!

The same goes for any plugins or extensions.

Maintenance Tip #5: Be vigilant about CMS updates. Keep ‘em small and painless. If a faster site wasn’t enough incentive, you don’t want to be unknowingly advertising for hackers, right?

That’s all for now!

This should be a fairly simple, manageable list to get your site’s performance back on the right track — and, most importantly, convert more leads. I suggest reviewing it once a month, and if you haven’t made any investment in site speed before this, definitely check out Portent’s Massive Guide To Site Speed as a starting place. If you have any questions, please let me know in the comments below.

The post A Web Dev’s checklist for maintaining page speed appeared first on Portent.

2017-09-21T20:56:34+00:00 Andy Schaff Conversation Marketing: Internet Marketing with a Twist of Lemon Connecting Marketers to Machine Learning: A Traveler’s Guide Through Two Utterly Dissimilar Worlds

Artificial Intelligence for Marketing by Jim Sterne

There are people in the world who work with and understand AI and machine learning. And there are people in the world who work with and understand marketing. The intersection of those two groups is a vanishingly tiny population.

Until recently the fact of that nearly empty set didn’t much matter. But with the dramatic growth in machine learning penetration into key marketing activities, that’s changed. If you don’t understand enough about these technologies to use them effectively…well…chances are some of your competitors do.

AI for Marketing, Jim Sterne’s new book,  is targeted specifically toward widening that narrow intersection of two populations into something more like a broad union. It’s not an introduction to machine learning for the data scientist or technologist (though there’s certainly a use and a need for that). It’s not an introduction to marketing (though it does an absolutely admirable job introducing practical marketing concepts). It’s a primer on how to move between those two worlds.

Interestingly, in AI for Marketing, that isn’t a one-way street. I probably would have written this book on the assumption that the core task was to get marketing folks to understand machine learning. But AI for Marketing makes the not unreasonable assumption that as challenged as marketing folks are when it comes to AI, machine learning folks are often every bit as ignorant when it comes to marketing. Of course, that first audience is much larger – there are probably 1,000 marketing folks for every machine learner. But if you are an enterprise wanting two teams to collaborate or a technology company wanting to fuse your machine learning smarts to marketing problems, it makes sense to treat this as a two-way street.

Here’s how the book lays out.

Chapter 1 just sets the table on AI and machine learning. It’s a big chapter and it’s a bit of a grab bag, with everything from why you should be worried about AI to where you might look for data to feed it. It’s a sweeping introduction to an admittedly huge topic, but it doesn’t do a lot of real work in the broader organization of the book.

That real work starts in Chapter 2 with the introduction to machine learning. This chapter is essential for Marketers. It covers a range of analytic concepts: an excellent introduction into the basics of how to think about models (a surprisingly important and misunderstood topic), a host of common analytics problems (like high cardinality) and then introduces core techniques in machine learning. If you’ve ever sat through data scientists or technology vendors babbling on about support vector machines and random forests, and wondered if you’d been airlifted into an incredibly confusing episode of Game of Drones, this chapter will be a godsend. The explanations are given in the author’s trademark style: simple, straightforward and surprisingly enjoyable given the subject matter. You just won’t find a better, more straightforward introduction to these methods for the interested but not enthralled businessperson.

In Chapter 3, Jim walks the other way down the street – introducing modern marketing to the data scientist. After a long career explaining analytics to business and marketing folks, Jim has absorbed an immense amount of marketing knowledge. He has this stuff down cold and he’s every bit as good (maybe even better) taking marketing concepts back to analysts as he is working in the other direction.  From a basic intro into the evolution of modern marketing to a survey of the key problems folks are always trying to solve (attribution, mix, lifetime value, and personalization), this chapter nails it. If you subscribe to the theory (and I do) that any book on Marketing could more appropriately have been delivered as a single chapter, then just think of this as the rare book on Marketing delivered at the right length.

If you accept the idea that bridging these two worlds needs movement in both directions, the structure to this point is obvious. Introduce one. Introduce the other. But then what?

Here’s where I think the structure of the book really sings. To me, the heart of the book is in Chapters 4, 5 and 6 (which I know sounds like an old Elvis Costello song). Each chapter tackles one part of the marketing funnel and shows how AI and machine learning can be used to solve problems.

Chapter 4 looks at up-funnel activities around market research, public relations, social awareness, and mass advertising. Chapter 5 walks through persuasion and selling including the in-store journey (yeah!), shopping assistants, UX, and remarketing. Chapter 6 covers (you should be able to guess) issues around retention and churn including customer service and returns. Chapter 7 is a kind of “one ring to rule them all”, covering the emergence of integrated, “intelligent” marketing platforms that do everything. Well….maybe. Call me skeptical on this front.

Anyway, these chapters are similar in tone and rich in content. You get the core issues explained, a discussion of how AI and machine learning can be used, and brief introductions into the vendors and people who are doing the work. For the marketer, that means you can find the problems that concern you, get a sense of where the state of machine learning stands vis-à-vis your actual problem set, and almost certainly pick-up a couple of ideas about who to talk to and what to think about next.

If you’re into this stuff at all, these four chapters will probably get you pretty excited about the possibilities. So think of Chapter 8 as a cautionary shot across the bow. From being too good for your own good to issues around privacy, hidden biases and, repeat after me, “correlation is not causation,” this is Pandora’s little chapter of analytics and machine learning troubles.

So what’s left? Think about having a baby. The first part is exciting and fun. The next part is long and tedious. And labor – the last part – is incredibly painful. It’s pretty much the same when it comes to analytics. Operationalizing analytics is that last, painful step. It comes at the end of the process and nobody thinks it’s any fun. Like the introduction to marketing, the section on operationalizing AI bears all the hallmarks of long, deep familiarity with the issues and opportunities in enterprise adoption of analytics and technology. There’s tons of good, sound advice that can help you actually get some of this stuff done.

Jim wraps up with the seemingly obligatory look into the future. Now, I’m pretty confident that none of us have the faintest idea how the future of AI is going to unfold. And if I really had to choose, I guess I prefer my crystal ball to be in science fiction form where I don’t have to take anything but the plot too seriously. But there’s probably a clause in every publisher’s AI book contract that an author must speculate on how wonderful/dangerous the future will be. Jim keeps it short, light, and highly speculative. Mission accomplished.


Summing Up

I think of AI for Marketing as a handy guidebook into two very different, neighboring lands. For most of us, the gap between the two is an un-navigable chasm. AI for Marketing takes you into each locale and introduces you to the things you really must know about them. It’s a fine introduction not just into AI and Machine Learning but into modern marketing practice as well. Best of all, it guides you across the narrow bridges that connect the two and makes it easier to navigate for yourself. You couldn’t ask for a wiser, more entertaining guide to walk you around and over that bridge between two utterly dissimilar worlds that grow more necessarily connected every day.


Full Disclosure: I know and like the author – Jim Sterne – of AI for Marketing. Indeed, with Jim the verbs know and like are largely synonymous. Nor will I pretend that this doesn’t impact my thoughts on the work. When you can almost hear someone’s voice as you read their words, it’s bound to impact your enjoyment and interpretation. So absolutely no claim to be unbiased here!


2017-09-21T22:58:23+00:00 garyangel Measuring the Digital World Apple HomeKit devices are suddenly booming

LIFX just announced HomeKit compatibility for its Wi-Fi smart lighting devices. Not just for new LIFX and LIFX+ lighting that you can buy from today on, but for existing LIFX products already in homes. It’s a trick that comes courtesy of a software update available now that makes existing LIFX products compatible with Apple's smart-home platform. But LIFX is just the latest in a series of companies to have made older products HomeKit compatible, thanks largely to Apple loosening the restrictions it had placed on its HomeKit partners.

In June, Apple announced software-based authentication for HomeKit. Prior to that, it required hardware-based authentication whereby every company making HomeKit products had to include an Apple-approved...

Continue reading…

2017-09-21T16:00:05+00:00 Thomas Ricker The Verge - All Posts How to Prioritize SEO Tasks [+Worksheet]

Posted by BritneyMuller

“Where should a company start [with SEO]?” asked an attendee after my AMA Conference talk.

As my mind spun into a million different directions and I struggled to form complete sentences, I asked for a more specific website example. A healthy discussion ensued after more direction was provided, but these “Where do I start?” questions occur all the time in digital marketing.

SEOs especially are in a constant state of overwhelmed-ness (is that a word?), but no one likes to talk about this. It’s not comfortable to discuss the thousands of errors that came back after a recent site crawl. It’s not fun to discuss the drop in organic traffic that you can’t explain. It’s not possible to stay on top of every single news update, international change, case study, tool, etc. It’s exhausting and without a strategic plan of attack, you’ll find yourself in the weeds.

I’ve performed strategic SEO now for both clients and in-house marketing teams, and the following five methods have played a critical role in keeping my head above water.

First, I had to source this question on Twitter:

How do you prioritize SEO fixes?
— Britney Muller (@BritneyMuller) September 15, 2017

Here was some of the best feedback from true industry leaders:


Murat made a solid distinction between working with SMBs versus large companies:


This is sad, but so true (thanks, Jeff!):


To help you get started, I put together an SEO prioritization worksheet in Google Sheets. Make yourself a copy (File > Make a copy) and go wild!:

Free SEO prioritization workflow sheet


  1. Agree upon & set specific goals
  2. Identify important pages for conversions
  3. Perform a site crawl to uncover technical opportunities
  4. Employ Covey's time management grid
  5. Provide consistent benchmarks and reports

#1 Start with the end in mind

What is the end goal? You can have multiple goals (both macro and micro), but establishing a specific primary end goal is critical.

The only way to agree upon an end goal is to have a strong understanding of your client’s business. I’ve always relied on these new client questions to help me wrap my head around a new client’s business.

[Please leave a comment if you have other favorite client questions!]

This not only helps you become way more strategic in your efforts, but also shows that you care.

Fun fact: I used to use an alias to sign up for my client’s medical consultations online to see what the process was like. What automated emails did they send after someone made an appointment? What are people required to bring into a consult? What is a consult like? How does a consult make someone feel?

Clients were always disappointed when I arrived for the in-person consult, but happy that my team and I were doing our research!

Goal setting tips:


Seems obvious, but it’s essential to stay on track and set benchmarks along the way.

Be specific

Don’t let vague marketing jargon find its way into your goals. Be specific.

Share your goals

A study performed by Psychology professor Dr. Gail Matthews found that writing down and sharing your goals boosts your chances of achieving them.

Have a stretch goal

"Under-promise and over-deliver" is a great rule of thumb for clients, but setting private stretch goals (nearly impossible to achieve) can actually help you achieve more. Research found that when people set specific, challenging goals it led to higher performance 90% of the time.

#2 Identify important pages for conversions

There are a couple ways you can do this in Google Analytics.

Behavior Flow is a nice visualization for common page paths which deserve your attention, but it doesn’t display specific conversion paths very well.

Behavior flow google analytic report

It’s interesting to click on page destination goals to get a better idea of where people come into that page from and where they abandon it to:

behavior flow page path in google analytics

Reverse Goal Paths are a great way to discover which page funnels are the most successful for conversions and which could use a little more love:

Reverse goal path report in google analytics

If you want to know which pages have the most last-touch assists, create a Custom Report > Flat Table > Dimension: Goal Previous Step - 1 > Metric: Goal Completions > Save

Last touch page report in google analytics

Then you’ll see the raw data for your top last-touch pages:

Top pages report in Google Analytics

Side note: If the Marketing Services page is driving the second most assists, it’s a great idea to see where else on the site you can naturally weave in Marketing Services Page CTAs.

The idea here is to simply get an idea of which page funnels are working, which are not, and take these pages into high consideration when prioritizing SEO opportunities.

If you really want to become a conversion funnel ninja, check out this awesome Google Analytics Conversion Funnel Survival Guide by Kissmetrics.

#3 Crawl your site for issues

While many of us audit parts of a website by hand, we nearly all rely on a site crawl tool (or two) to uncover sneaky technical issues.

Some of my favorites:

I really like Moz Pro, DeepCrawl, and Raven for their automated re-crawling. I’m alerted anytime new issues arise (and they always do). Just last week, I got a Moz Pro email about these new pages that are now redirecting to a 4XX because we moved some Learning Center pages around and missed a few redirects (whoops!):


An initial website crawl can be incredibly overwhelming and stressful. I get anxiety just thinking about a recent Moz site crawl: 54,995 pages with meta noindex, 60,995 pages without valid canonical, 41,234 without an <h1>... you get the idea. Ermahgerd!! Where do you start?!

This is where a time management grid comes in handy.

#4 Employ Covey's time management grid


Time management and prioritization is hard, and many of us fall into "Urgent" traps.

Putting out small, urgent SEO fires might feel effective in the short term, but you’ll often fall into productivity-killing rabbit holes. Don’t neglect the non-urgent important items!

Prioritize and set time aside for those non-urgent yet important tasks, like writing short, helpful, unique, click-enticing title tags for all primary pages.

Here’s an example of some SEO issues that fall into each of the above 4 categories:


To help prioritize Not Urgent/Important issues for maximum effectiveness here at Moz, I’m scheduling time to address high-volume crawl errors.’s largest issues (highlighted by Moz Pro) are meta noindex. However, most of these are intentional.


You also want to consider prioritizing any issues on the primary page flows that we discovered earlier. You can also sort issues by shallow crawl depth (fewer clicks from homepage, which are often primary pages to focus on):


#5 Reporting & communication

Consistently reporting your efforts on increasing your client’s bottom line is critical for client longevity.

Develop a custom SEO reporting system that’s aligned with your client’s KPIs for every stage of your campaign. A great place to start is with a basic Google Analytics Custom Report that you can customize further for your client:

While traffic, search visibility, engagement, conversions, etc. get all of the reporting love, don’t forget about the not-so-tangible metrics. Are customers less frustrated navigating the new website? How does the new site navigation make a user feel? This type of monitoring and reporting can also be done through kickass tools like Lucky Orange or Mechanical Turk.

Lastly, reporting is really about communication and understanding people. Most of you have probably had a client who prefers a simple summary paragraph of your report, and that’s ok too.

Hopefully these tips can help you work smarter, not harder.


Don’t miss your site’s top technical SEO opportunities:

Crawl your site with Moz Pro

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

2017-09-21T00:07:00+00:00 BritneyMuller The Moz Blog What You'll Be Told
Here's the ecosystem ... $100 is harvested from customers, $25 - $35 goes to vendors, and increasingly independent catalogers are becoming part of holding companies or private equity or Wall St. owned brands.

We're at a point where online brands have 5% - 20% ad-to-sales ratios ... while catalog brands have 20% - 40% ad-to-sales ratios. This difference ... about 20% of $100 generated by a customer, or $20, is flowing to the print industry instead of being repurposed for modernization.

If you go to your partners in the vendor community, vendors who benefit from the 20% tax you pay to put paper in the mail, you'll hear a common story:
  • "If you reduce pages or reduce contacts, you will reduce sales."
You'll be told there are rules of thumb, and those rules of thumb suggest that reducing pages or reducing contacts is a bad idea. And in many ways, the advice is accurate.
  • Sales will decrease when you get rid of pages or contacts.
  • Profit, however, is fungible.
The first thing you'll do is refer to your mail/holdout tests, and you'll quantify the fraction of sales that still happen if catalogs are not mailed. You run the tests, right? And on average, you don't believe the results of the tests, because the test results are inconsistent (because sample sizes are too small because your Executive Team didn't want to "lose sales"). Or, the results are consistent and tell you that half or more of the demand you generate has nothing to do with catalog marketing, and you don't like what the results tell you. Or the most common answer ... you don't execute the tests, because your vendor partners tell you that your matchbacks are fine (they're not - they horribly overstate results in a way that causes you to spend more money on paper/printing).

But if you are honest, you've run the tests, you've validated the findings and then you have partnered with your Finance Team to run a profit and loss statement ... with and without catalogs. Your results might look something like this (your mileage will vary).
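Since the original table doesn’t survive here, a hypothetical version of that with/without comparison (every figure below is invented for illustration, not real company data) might look like:

```python
# Hypothetical with/without-catalog P&L, in the spirit of the scenario above.
# Every number here is an invented illustration, not real company data.
def profit(sales, marketing_cost, cogs_rate=0.60):
    """Gross margin after cost of goods, minus marketing spend."""
    return sales * (1 - cogs_rate) - marketing_cost

with_catalog = profit(sales=100_000_000, marketing_cost=25_000_000)  # 25% ad-to-sales
# Assume half the demand is organic and survives without catalogs:
without_catalog = profit(sales=50_000_000, marketing_cost=8_000_000)

print(f"with: ${with_catalog:,.0f}  without: ${without_catalog:,.0f}")
# Sales are cut in half, but the profit gap is far smaller than the sales gap.
```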

You'll show your results to your vendor partners. Here's what you will be told.
  • You lose $50,000,000 if you stop mailing catalogs. You can't do that.
  • You make more profit by mailing catalogs. You can't stop mailing catalogs, or you will be less profitable.
And based on the facts of the table, your vendor partners are right. Your business will be cut in half. You will have to get rid of staff tied to strictly catalog marketing and that will be painful.

But notice that the difference in profit isn't that big.

What would you do if you had an additional twenty million dollars of cash laying around?

You wouldn't just do nothing, would you?

No, you'd do something!

You might execute free shipping 24/7/365 ... you might initiate a loyalty program and you might give your best customers perks previously unheard of, because you'd be able to because you'd have access to all this cash that you used to pay to paper/printer folks.

And you might not get rid of catalogs!

You might do the following ... 
  1. Reduce catalog marketing expense by 70%.
  2. Increase online marketing expense by 50%.
  3. Offer free shipping 24/7/365.
And if you did that, your profit and loss statement might look different.

You're mailing 30% of the catalogs you used to mail - you are doing more online marketing and low-cost / no-cost customer acquisition - and you are doing free shipping 24/7/365. And when you operate your business this way, coupled with the 50% of your business that is organic and not driven by catalog marketing, you quickly learn that you can be just as profitable. Smaller, but just as profitable.

Here's what you will be told.
  1. Sales decreased.
  2. Profit didn't change.
  3. You lost market share.
Now, I went through this back-in-the-day at Nordstrom. We ran the scenarios. We learned that the catalog business was essentially break-even, based on the scenarios we ran, based on how we'd reinvest our efforts.

Then we fully killed the catalog altogether ... and a funny thing happened.
  1. Sales didn't decrease ... online and store sales increased, in fact online sales increased faster than call center sales decreased, resulting in a net direct channel sales increase.
  2. Profit increased dramatically ... without $36,000,000 of paper out there, profit surged.
I worked with a company that tested all of their direct mail and catalog efforts ... and learned that 90% of their sales would happen anyway. The Executive Team did not believe the results, and more importantly, did not know what they would do if the results were true and they ran with the results.

I've also worked with companies that lose 80% of their sales if the catalog disappears. Cutting frequency or pages is catastrophic, and is a bad idea.

So the issue isn't whether you cut 0% of your pages, or 70% of your pages, or 100% of your pages.

The issue is this.
  • Are you willing to think independently - thinking outside of an ecosystem that takes 20% - 40% of your cash leaving you with 5%?
  • Are you willing to run scenarios, to test, to try different tactics and strategies? Are you willing to go where the scenarios take you?
  • Are you willing to speak up? Are you willing to speak with conviction? Are you willing to have an unpopular conversation with a vendor, even if the vendor rep yells at you? Are you willing to have an unpopular conversation with your CEO, even if your CEO yells at you?
I think you are willing to do all of those things!

And I think your vendor partners are willing to go on the ride with you, because I think your vendor partners are independent thinkers. They like strategy. They like confidence. They like being part of a solution. They like success!

The Structure of the Modern Catalog Industry inhibits creativity, and for good reason ... the ecosystem we belong to requires us to pass 20% of our cash along to the vendor community to pay the debt on the fancy printing devices we don't fully take advantage of. Imagine how frustrating it must be to be a printer who goes knee-deep into debt to help you perform better, only to hear that your big change in strategy is moving the November catalog one week later and adding four pages? They're saddled with debt because of our inaction ... we're passing 20% of our cash to them to pay the debt. That's a severe tax burden on each side of the equation.

I think our industry ... all sides of our industry ... is better than this.

I'm betting on you. And our vendors. All of 'ya.

Let's do something!
2017-09-21T03:10:05+00:00 Kevin Hillstrom Kevin Hillstrom: MineThatData Looking for improbably frequent lottery winners

After hearing the story of reporter Lawrence Mower, who discovered fraudsters after a FOIA request in Florida, a team from the Columbia Journalism Review and PennLive looked to expand on the analysis.

Intrigued, we wanted to chart new territory: to find out whether these repeat winning patterns exist across the country. We decided to submit public records requests in every state with a lottery—an adventure in itself given that FOIA laws vary significantly by state. In all, we sent more than 100 public record requests to lotteries for information about their winners, game odds, and investigative reports. Getting those records wasn’t simple, as we outline below.

See more details.

This should be turned into a class project for Stat 101 courses.


2017-09-21T04:49:57+00:00 Nathan Yau FlowingData Stack Overflow salary calculator for developers

Stack Overflow used data from their developer survey to build a prediction model for salary, based on role, location, education, experience, and skills. The result was a salary calculator that you can use to gauge how much you should be making.

In this salary calculator, we report a predicted salary for the location, education, experience, and other information you enter. We also report a 50% prediction interval. The specific statistical meaning of this interval is that we expect 50% of people with the same characteristics as you to have salaries within that range; it spans the 25th to 75th percentiles. The interval is just as important as the prediction itself (the 50th percentile), because it gives you an understanding of what the range of expected salaries could be.
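The quartile mechanics behind that kind of interval can be sketched with simulated data (the salary distribution below is made up; this is not Stack Overflow’s model):

```python
# Sketch of a 50% prediction interval (25th to 75th percentile) on simulated
# salaries. The distribution parameters are invented, not Stack Overflow's.
import random
import statistics

random.seed(0)
salaries = [random.lognormvariate(11.2, 0.35) for _ in range(10_000)]

# statistics.quantiles with n=4 returns the 25th, 50th, and 75th percentiles
low, prediction, high = statistics.quantiles(salaries, n=4)
print(f"predicted ${prediction:,.0f}, 50% interval ${low:,.0f} to ${high:,.0f}")
```

By construction, about half of the simulated salaries fall inside the interval, which is exactly the 50% coverage the calculator's interval claims.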



2017-09-21T08:06:05+00:00 Nathan Yau FlowingData In search of competition

The busiest Indian restaurants in New York City are all within a block or two of each other.

Books sell best in bookstores, surrounded by other books, their ostensible competitors.

And it's far easier to sell a technology solution if you're not the only one pioneering the category.

Competition is a signal. It means that you're offering something that's not crazy. Competition gives people reassurance. Competition makes it easier to get your point across. Competition helps us understand that people like us do things like this.

If you have no competition, time to find some.

2017-09-21T08:54:00+00:00 Seth Godin Seth's Blog Russian-Bought Ads on Facebook Spur Calls for Tighter Rules

The company is suddenly in the crosshairs of lawmakers pushing to crack down on exemptions that allow social-media companies to operate beyond the norms of political campaigns.

2017-09-21T13:33:37+00:00 Technology The Media Has A Probability Problem

This is the 11th and final article in a series reviewing news coverage of the 2016 general election and exploring how Donald Trump won and why his chances were underrated by most of the American media.

Two Saturday nights ago, just as Hurricane Irma had begun its turn toward Florida, the Associated Press sent out a tweet proclaiming that the storm was headed toward St. Petersburg and not its sister city Tampa, just 17 miles to the northeast across Tampa Bay.

Hurricane forecasts have improved greatly over the past few decades, becoming about three times more accurate at predicting landfall locations. But this was a ridiculous, even dangerous tweet: The forecast was nowhere near precise enough to distinguish Tampa from St. Pete. For most of Irma’s existence, the entire Florida peninsula had been included in the National Hurricane Center’s “cone of uncertainty,” which covers two-thirds of possible landfall locations. The slightest change in conditions could have had the storm hitting Florida’s East Coast, its West Coast, or going right up the state’s spine. Moreover, Irma measured hundreds of miles across, so even areas that weren’t directly hit by the eye of the storm could have suffered substantial damage. By Saturday night, the cone of uncertainty had narrowed, but trying to distinguish between St. Petersburg and Tampa was like trying to predict whether 31st Street or 32nd Street would suffer more damage if a nuclear bomb went off in Manhattan.

To its credit, the AP deleted the tweet the next morning. But the episode was emblematic of some of the media’s worst habits when covering hurricanes — and other events that involve interpreting probabilistic forecasts. Before a storm hits, the media demands impossible precision from forecasters, ignoring the uncertainties in the forecast and overhyping certain scenarios (e.g., the storm hitting Miami) at the expense of other, almost-as-likely ones (e.g., the storm hitting Marco Island). Afterward, it casts aspersions on the forecasts unless they happened to exactly match the scenario the media hyped up the most.

Indeed, there’s a fairly widespread perception that meteorologists performed poorly with Irma, having overestimated the threat to some places and underestimated it elsewhere. Even President Trump chimed in to say the storm hadn’t been predicted well, tweeting that the devastation from Irma had been “far greater, at least in certain locations, than anyone thought.” In fact, the Irma forecasts were pretty darn good: Meteorologists correctly anticipated days in advance that the storm would take a sharp right turn at some point while passing by Cuba. The places where Irma made landfall — in the Caribbean and then in Florida — were consistently within the cone of uncertainty. The forecasts weren’t perfect: Irma’s eye wound up passing closer to Tampa than to St. Petersburg after all, for example. But they were about as good as advertised. And they undoubtedly saved a lot of lives by giving people time to evacuate in places like the Florida Keys.

The media keeps misinterpreting data — and then blaming the data

You won’t be surprised to learn that I see a lot of similarities between hurricane forecasting and election forecasting — and between the media’s coverage of Irma and its coverage of the 2016 campaign. In recent elections, the media has often overestimated the precision of polling, cherry-picked data and portrayed elections as sure things when that conclusion very much wasn’t supported by polls or other empirical evidence.

As I’ve documented throughout this series, polls and other data did not support the exceptionally high degree of confidence that news organizations such as The New York Times regularly expressed about Hillary Clinton’s chances. (We’ve been using the Times as our case study throughout this series, both because they’re such an important journalistic institution and because their 2016 coverage had so many problems.) On the contrary, the more carefully one looked at the polling, the more reason there was to think that Clinton might not close the deal. In contrast to President Obama, who overperformed in the Electoral College relative to the popular vote in 2012, Clinton’s coalition (which relied heavily on urban, college-educated voters) was poorly configured for the Electoral College. In contrast to 2012, when hardly any voters were undecided between Obama and Mitt Romney, about 14 percent of voters went into the final week of the 2016 campaign undecided about their vote or saying they planned to vote for a third-party candidate. And in contrast to 2012, when polls were exceptionally stable, they were fairly volatile in 2016, with several swings back and forth between Clinton and Trump — including the final major swing of the campaign (after former FBI Director James Comey’s letter to Congress), which favored Trump.

By Election Day, Clinton simply wasn’t all that much of a favorite; she had about a 70 percent chance of winning according to FiveThirtyEight’s forecast, as compared to 30 percent for Trump. Even a 2- or 3-point polling error in Trump’s favor — about as much as polls had missed on average, historically — would likely be enough to tip the Electoral College to him. While many things about the 2016 election were surprising, the fact that Trump narrowly won when polls had him narrowly trailing was an utterly routine and unremarkable occurrence. The outcome was well within the “cone of uncertainty,” so to speak.
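The arithmetic behind that claim can be sketched with a quick simulation. The numbers here are illustrative assumptions for the sketch (a 2-point lead, a 3-point normally distributed historical polling error), not FiveThirtyEight's actual model:

```python
import random

def win_probability(lead_pts, error_sd=3.0, n_sims=100_000, seed=0):
    """Chance the leading candidate wins, given a polling lead and the
    typical size of historical polling errors (normal approximation)."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n_sims)
               if lead_pts + rng.gauss(0, error_sd) > 0)
    return wins / n_sims

# A modest lead plus a normal-sized polling error leaves the trailing
# candidate with a very real chance of winning.
print(win_probability(2.0))  # roughly 0.75 under these toy assumptions
```

Under these assumptions a 2-point favorite wins only about three times in four, which is the sense in which a 70/30 forecast reflects routine polling error rather than a bold prediction.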

So if the polls called for caution rather than confidence, why was the media so sure that Clinton would win? I’ve tried to address that question throughout this series of essays — which we’re finally concluding, much to my editor’s delight.

Probably the most important problem with 2016 coverage was confirmation bias — coupled with what you might call good old-fashioned liberal media bias. Journalists just didn’t believe that someone like Trump could become president, running a populist and at times also nationalist, racist and misogynistic campaign in a country that had twice elected Obama and whose demographics supposedly favored Democrats. So they cherry-picked their way through the data to support their belief, ignoring evidence — such as Clinton’s poor standing in the Midwest — that didn’t fit the narrative.

But the media’s relatively poor grasp of probability and statistics also played a part: It led them to misinterpret polls and polling-based forecasts that could have served as a reality check against their overconfidence in Clinton.

How a probabilistic election forecast works — and how it can be easy to misinterpret

The idea behind an election forecast like FiveThirtyEight’s is to take polls (“Clinton is ahead by 3 points”) and transform them into probabilities (“She has a 70 percent chance of winning”). I’ve been designing and publishing forecasts like these for 15 years in two areas (politics and sports) that receive widespread public attention. And I’ve found there are basically two ways that things can go wrong.

First, there are errors of analysis. As an example, if you had a model of last year’s election that concluded that Clinton had a 95 or 99 percent chance of winning, you committed an analytical error. Models that expressed that much confidence in her chances had a host of technical flaws, such as ignoring the correlations in outcomes between states.
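To see why ignoring correlations between states inflates confidence, consider a toy model. Everything here is an illustrative assumption (ten identical swing states, a 2-point lead in each, a 3-point polling error), not any real forecast: if every state's error is drawn independently, the underdog must get lucky many times over, but if one shared national miss shifts all states together, a single miss flips the whole map.

```python
import random

def underdog_map_win_prob(shared_error, lead=2.0, sd=3.0,
                          n_states=10, n_sims=50_000, seed=1):
    """Chance the trailing candidate carries a majority of states.

    shared_error=True  -> one national polling miss shifts every state together
    shared_error=False -> each state's polling miss is independent
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        national = rng.gauss(0, sd) if shared_error else 0.0
        carried = sum(
            1 for _ in range(n_states)
            if lead + national + (0.0 if shared_error else rng.gauss(0, sd)) < 0
        )
        if carried > n_states // 2:
            wins += 1
    return wins / n_sims

# Correlated errors give the underdog a far bigger chance than independent ones.
print(underdog_map_win_prob(True), underdog_map_win_prob(False))
```

In this toy setup the correlated-error model gives the underdog roughly a one-in-four chance, while the independence assumption shrinks it to a couple of percent, which is exactly the kind of overconfidence the 95-to-99-percent models exhibited.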

But while statistical modeling may not always hit the mark, people’s subjective estimates of how polls translate into probabilities are usually even worse. Given a complex set of polling data — say, the Democrat is ahead by 3 points in Pennsylvania and Michigan, tied in Florida and North Carolina, and down by 2 points in Ohio — it’s far from obvious how to figure out the candidate’s chances of winning the Electoral College. Ad hoc attempts to do so can lead to problematic coverage like this article that appeared in The New York Times last Oct. 31, three days after Comey had sent his letter to Congress:

Mrs. Clinton’s lead over Mr. Trump appears to have contracted modestly, but not enough to threaten her advantage over all or to make the electoral math less forbidding for Mr. Trump, Republicans and Democrats said. […]

The loss of a few percentage points from Mrs. Clinton’s lead, and perhaps a state or two from the battleground column, would deny Democrats a possible landslide and likely give her a decisive but not overpowering victory, much like the one President Obama earned in 2012. […]

You’ll read lots of clips like this during an election campaign, full of claims about the “electoral math,” and they often don’t hold up to scrutiny. In this case, the article’s assertion that the loss of “a few percentage points” wouldn’t hurt Clinton’s chances of victory was wrong, and not just in hindsight; instead, the Comey letter made Clinton much more vulnerable, roughly doubling Trump’s probability of winning.

But even if you get the modeling right, there’s another whole set of problems to think about: errors of interpretation and communication. These can run in several different directions. Consumers can misunderstand the forecasts, since probabilities are famously open to misinterpretation. But people making the forecasts can also do a poor job of communicating the uncertainties involved. For example, although weather forecasters are generally quite good at describing uncertainty, the cone of uncertainty is potentially problematic because viewers might not realize it represents only two-thirds of possible landfall locations.

Intermediaries — other people describing a forecast on your behalf — can also be a problem. Over the years, we’ve had many fights with well-meaning TV producers about how to represent FiveThirtyEight’s probabilistic forecasts on air. (We don’t want a state where the Democrat has only a 51 percent chance to win to be colored in solid blue on their map, for instance.) And critics of statistical forecasts can make communication harder by passing along their own misunderstandings to their readers. After the election, for instance, The New York Times’ media columnist bashed the newspaper’s Upshot model (which had estimated Clinton’s chances at 85 percent) and others like it for projecting “a relatively easy victory for Hillary Clinton with all the certainty of a calculus solution.” That’s pretty much exactly the wrong way to describe such a forecast, since a probabilistic forecast is an expression of uncertainty. If a model gives a candidate a 15 percent chance, you’d expect that candidate to win about one election in every six or seven tries. You wouldn’t expect the fundamental theorem of calculus to be wrong … ever.

I don’t think we should be forgiving of innumeracy like this when it comes from prominent, experienced journalists. But when it comes to the general public, that’s a different story — and there are plenty of things for FiveThirtyEight and other forecasters to think about in terms of our communication strategies. There are many potential avenues for confusion. People associate numbers with precision, so using numbers to express uncertainty in the form of probabilities might not be intuitive. (Listing a decimal place in our forecast, as FiveThirtyEight historically has done — e.g., 28.6 percent chance rather than 29 percent or 30 — probably doesn’t help in this regard.) Also, both probabilities and polls are usually listed as percentages, so people can confuse one for the other — they might mistake a forecast showing Clinton with a 70 percent chance of winning as meaning she has a 70-30 polling lead over Trump, which would put her on her way to a historic, 40-point blowout.

What can also get lost is that election forecasts — like hurricane forecasts — represent a continuous range of outcomes, none of which is likely to be exactly right. The following diagram is an illustration that we’ve used before to show uncertainty in the FiveThirtyEight forecast. It’s a simplification — showing a distribution for the national popular vote only, along with which candidate wins the Electoral College. Still, the diagram demonstrates several important concepts for interpreting polls and forecasts:

  • First, as I mentioned, no exact outcome is all that likely. If you rounded the popular vote to the nearest whole number, the most likely outcome was Clinton winning by 4 percentage points. Nonetheless, the chance that she’d win by exactly 4 points was only about 10 percent. “Calling” every state correctly in the Electoral College is even harder. FiveThirtyEight’s model did it in 2012 — in a lucky break that may have given people a false impression about how easy it is to forecast elections — but we estimated that the chances of having a perfect forecast again in 2016 were only about 2 percent. Thus, properly measuring the uncertainty is at least as important a part of the forecast as plotting the single most likely course. You’re almost always going to get something “wrong” — so the question is whether you can distinguish the relatively more likely upsets from the relatively less likely ones.
  • Second, the distribution of possible outcomes was fairly wide last year. The distribution is based on how accurate polls of U.S. presidential elections have been since 1972, accounting for the number of undecideds and the number of days until the election. The distribution was wider than usual because there were a lot of undecided voters — and more undecided voters mean more uncertainty. Even in a normal year, however, the polls aren’t quite as precise as most people assume.
  • Third, the forecast is continuous, rather than binary. When evaluating a poll or a polling-based forecast, you should look at the margin between the poll and the actual result and not just who won and lost. If a poll showed the Democrat winning by 1 point and the Republican won by 1 point instead, the poll did a better job than if the Democrat had won by 9 points (even though the poll would have “called” the outcome correctly in the latter case). By this measure, polls in this year’s French presidential election — which Emmanuel Macron was predicted to win by 22 points but actually won by 32 points — were much worse than polls of the 2016 U.S. election.
  • Finally, the actual outcome in last year’s election was right in the thick of the probability distribution, not out toward the tails. The popular vote was obviously pretty close to what the polls estimated it would be. It also wasn’t that much of a surprise that Trump won the Electoral College, given where the popular vote wound up. (Our forecast gave Trump a better than a 25 percent chance of winning the Electoral College conditional on losing the popular vote by 2 points, an indication of his demographic advantages in the swing states.) One might even dare say that the result last year was relatively predictable, given the range of possible outcomes.

The press presumed that Clinton would win, but the public saw a close race

I’ve often heard it asserted that the widespread presumption of an inevitable Clinton victory was itself a problem for her campaign — Clinton has even made a version of this claim herself. So we have to ask: Could this misreading of the polls — and polling-based forecasts — actually have affected the election’s outcome?

It depends on whether you’re talking about how the media and other political elites read the polls — and how that influenced their behavior — or how the general public did. Regular voters, it turns out, were not especially confident about Clinton’s chances last year. For instance, in the final edition of the USC Dornsife/Los Angeles Times tracking poll, which asked voters to guess the probability of Trump and Clinton winning the election, the average voter gave Clinton only a 53 percent chance of winning and gave Trump a 43 percent chance — so while respondents slightly favored Clinton, it wasn’t with much confidence at all.

The American National Election Studies also asked voters to predict the most likely winner of the race, as it’s been doing since 1952. It found that 61 percent of voters expected Clinton to win, as compared to 33 percent for Trump. This proportion is about the same as other years — such as 2004 — in which polls showed a fairly close race, although one candidate (in that case, George W. Bush) was usually ahead. While, unlike the LA Times poll, the ANES did not ask voters to estimate the probability of Clinton winning, it did ask voters a follow-up question about whether they expected the election to be close or thought one of the candidates would “win by quite a bit.” Only 20 percent of respondents predicted a Clinton landslide, and only 7 percent expected a Trump landslide. Instead, almost three-quarters of voters correctly predicted a close outcome.

Voters weren’t overly bullish on Clinton’s chances

Share of voters expecting the Democratic presidential candidate to win, in pre-election surveys

2016 61%
2012 64
2008 59
2004 29
2000 47
1996 86
1992 56
1988 23
1984 12
1980 46
1976 43
1972 7
1968 22
1964 81
1960 33
1956 19
1952 35

Source: American National Election Studies

So be wary if you hear people within the media bubble assert that “everyone” presumed Clinton was sure to win. Instead, that presumption reflected elite groupthink — and it came despite the polls as much as because of the polls. There was a bewilderingly large array of polling data during last year’s campaign, and it didn’t always tell an obvious story. During the final week of the campaign, Clinton was ahead in most polls of most swing states, but with quite a few exceptions — and many of Clinton’s leads were within the margin of error and had been fading during the final 10 days of the campaign. The public took in this information and saw Clinton as the favorite, but they didn’t expect a blowout and viewed the outcome as highly uncertain. Our model read it the same way. The media looked at the same ambiguous data and saw what they wanted in it, using it to confirm their presumption that Trump couldn’t win.

News organizations learned the wrong lessons from 2012

During the 2012 election, FiveThirtyEight’s forecast consistently gave Obama better odds of winning re-election than the conventional wisdom did. Somehow in the midst of it, I became an avatar for projecting certainty in the face of doubt. But this role was always miscast — indeed, quite the opposite of what I hope readers take away from FiveThirtyEight’s work. In addition to making my own forecasts, I’ve spent a lot of my life studying probability and uncertainty. Cover these topics for long enough and you’ll come to a fairly clear conclusion: When it comes to making predictions, the world usually needs less certainty, not more.

A major takeaway from my book and from other people’s research on prediction is that most experts — including most journalists — make overconfident forecasts. (Weather forecasters are an important exception.) Events that experts claim to be nearly certain (say, a 95 percent probability) are often merely probable instead (the real probability is, say, 70 percent). And events they deem to be nearly impossible occur with some frequency. Another, related type of bias is that experts don’t change their minds quickly enough in the face of new information, sticking stubbornly to their previous beliefs even after the evidence has begun to mount against them.

Media coverage of major elections had long been an exception to this rule of expert overconfidence. For a variety of reasons — no doubt including the desire to inject drama into boring races — news coverage tended to overplay the underdog’s chances in presidential elections and to exaggerate swings in the polls. Even in 1984, when Ronald Reagan led Walter Mondale by 15 to 20 percentage points in the stretch run of the campaign, The New York Times somewhat credulously reported on Mondale’s enthusiastic crowds and talked up the possibility of a Dewey-defeats-Truman upset. The 2012 election — although it was a much closer race than 1984 — was another such example: Reporting focused too much on national polls and not enough on Obama’s Electoral College advantage, and thus portrayed the race as a “toss-up” when in reality Obama was a reasonably clear favorite. (FiveThirtyEight’s forecast gave Obama about a 90 percent chance of winning re-election on election morning.)

Since then, the pendulum has swung too far in the other direction, with the media often expressing more certainty about the outcome than is justified based on the polls. In addition to lowballing the chances for Trump, the media also badly underestimated the probability that the U.K. would leave the European Union in 2016, and that this year’s U.K. general election would result in a hung parliament, for instance. There are still some exceptions — the conventional wisdom probably overestimated Marine Le Pen’s chances in France. Nonetheless, there’s been a noticeable shift from the way elections used to be covered, and it’s worth pausing to consider why that is.

One explanation is that news organizations learned the wrong lessons from 2012. The “moral parable” of 2012, as Scott Alexander wrote, is that Romney was “the arrogant fool who said that all the evidence against him was wrong, but got his comeuppance.” Put another way, the lesson of 2012 was to “trust the data,” especially the polls.

FiveThirtyEight and I became emblems of that narrative, even though we sometimes tried to resist it. What I think people forget is that the confidence our model expressed in Obama’s chances in 2012 was contingent upon circumstances peculiar to 2012 — namely that Obama had a much more robust position in the Electoral College than national polls implied, and that there were very few undecided voters, reducing uncertainty. The 2012 election may have superficially looked like a toss-up, but Obama was actually a reasonably clear favorite. Pretty much the opposite was true in 2016 — the more carefully one evaluated the polls, the more uncertain the outcome of the Electoral College appeared. The real lesson of 2012 wasn’t “always trust the polls” so much as “be rigorous in your evaluation of the polls, because your superficial impression of them can be misleading.”

Another issue is that uncertainty is a tough sell in a competitive news environment. “The favorite is indeed favored, just not by as much as everyone thinks once you look at the data more carefully, so bet on the favorite at even money but the underdog against the point spread” isn’t that complicated a story, but it can be a difficult message to get across on TV in the midst of an election campaign when everyone has the attention span of a sugar-high 4-year-old. It can be even harder on social media, where platforms like Facebook reward simplistic coverage that confirms people’s biases.

Journalists should be wary of ‘the narrative’ and more transparent about their provisional understanding of developing stories

But every news organization faced competitive pressure in covering last year’s election — and only some of them screwed up the story. Editorial culture mattered a lot. In general, the problems were worse at The New York Times and other organizations that (as Michael Cieply, a former Times editor, put it) heavily emphasized “the narrative” of the campaign and encouraged reporters to “generate stories that fit the pre-designated line.”

If you re-read the Times’ general election coverage from the conventions onward, you’ll be struck by how consistent it was from start to finish. Although the polls were fairly volatile in 2016, you can’t really distinguish the periods when Clinton had a clear advantage from those when things were pretty tight. Instead, the narrative was consistent: Clinton was a deeply flawed politician, the “worst candidate Democrats could have run,” cast in “shadows” and “doubts” because of her ethical lapses. However, she was almost certain to win because Trump appealed to too narrow a range of demographic groups and ran an unsophisticated campaign, whereas Clinton’s diverse coalition and precise voter-targeting efforts gave her an inherent advantage in the Electoral College.

It was a consistent story, but it was consistently wrong.

One can understand why news organizations find “the narrative” so tempting. The world is a complicated place, and journalists are expected to write authoritatively about it under deadline pressure. There’s a management consulting adage that says when creating a product, you can pick any two of these three objectives: 1. fast, 2. good and 3. cheap. You can never have all three at once. The equivalent in journalism is that a story can be 1. fast, 2. interesting and/or 3. true — two out of the three — but it’s hard for it to be all three at the same time.

Deciding on the narrative ahead of time seems to provide a way out of the dilemma. Pre-writing substantial portions of the story — or at least, having a pretty good idea of what you’re going to say — allows it to be turned around more quickly. And narratives are all about wrapping the story up in a neat-looking package and telling readers “what it all means,” so the story is usually engaging and has the appearance of veracity.

The problem is that you’re potentially sacrificing No. 3, “true.” By bending the facts to fit your template, you run the risk of getting the story completely wrong. To make matters worse, most people — including most reporters and editors (also: including me) — have a strong tendency toward confirmation bias. Presented with a complicated set of facts, it takes a lot of work for most of us not to connect the dots in a way that confirms our prejudices. An editorial culture that emphasizes “the narrative” indulges these bad habits rather than resists them.

Instead, news organizations reporting under deadline pressure need to be more comfortable with a world in which our understanding of developing stories is provisional and probabilistic — and will frequently turn out to be wrong. FiveThirtyEight’s philosophy is basically that the scientific method, with its emphasis on verifying hypotheses through rigorous analysis of data, can serve as a model for journalism. The reason is not because the world is highly predictable or because data can solve every problem, but because human judgment is more fallible than most people realize — and being more disciplined and rigorous in your approach can give you a fighting chance of getting the story right. The world isn’t one where things always turn out exactly as we want them to or expect them to. But it’s the world we live in.

CORRECTION (Sept. 21, 2:40 p.m.): A previous version of footnote No. 10 mistakenly referred to the Electoral College in place of the national popular vote when discussing Trump’s chances of winning the election. The article has been updated.

2017-09-21T09:47:18+00:00 Nate Silver FiveThirtyEight Dynamically Pricing Hotel Rooms for Maximum Revenue 2017-09-21T02:02:53+00:00 DataTau This 'AI' standing desk really just has a touchscreen tablet built-in

I have a standing desk. I like it. I'm standing right now, actually, and blogging at the same time. My standing desk is automated, so I push buttons on its attached remote, and it goes up and down. It also has some onboard memory and saves heights for me. It's neat but not too wild. Meanwhile, a company called Autonomous says its new SmartDesk 3 is "the world's most powerful AI-powered standing desk."

Should I question the intelligence of my standing desk? What even is an AI desk? To Autonomous, it means a desk that can order food for you. The company apparently partnered with to offer food suggestions on the desk's built-in control panel. It can "anticipate when you'll be hungry," too. I would guess this means it knows the...


2017-09-20T21:31:28+00:00 Ashley Carman The Verge - All Posts The First Web Apps

the stories behind five web apps that launched in 1995, with extensive research and interviews

2017-09-20T22:14:05+00:00 Andy Baio Google Search Console

Google Search Console (GSC), formerly Webmaster Tools, is a free service offered by Google that helps you monitor and maintain your site’s presence in Google search results. You don’t need to sign up for GSC for your site to be included in Google’s search results, but doing so can help you understand how Google views your site and optimize its performance in search results. Unfortunately, Google only stores the past 90 days of search analytics data within GSC, so you must develop an archiving process to make quarter-over-quarter or year-over-year comparisons of how your SEO rankings develop.



When choosing a service to build a solution that continuously archives search analytics data, look at these Google Cloud Platform products:

BigQuery: a maintenance-free, worry-free cloud database to store and query your data.

App Engine: a powerful platform for building both web and mobile apps. It’s very scalable and takes away the headache that comes with running a regular server.

Cloud Storage: a simple cloud storage solution for data of all types and formats.


Using these three products in conjunction with the GSC API, which can query your search analytics data going back 90 days, Analytics Pros developed a solution to archive GSC data as it becomes available. The architecture looks as follows:



Besides syncing the daily data that becomes available after a few days, the program can also kick-start the BigQuery tables by backfilling the past 90 days of available data. Having data available for year-over-year comparison can give you insight into keyword search performance and the CTR of your landing pages. With this information in hand, you can put together a plan to address any issues found. Archiving search analytics data in BigQuery lets you analyze how you perform against seasonal search trends year over year—something not possible without an archiving process.
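As a rough sketch of the daily sync step (not Analytics Pros' actual implementation; the site URL, token handling, and table schema are all illustrative assumptions), a script might pull one day of data from the GSC search analytics endpoint and flatten it into newline-delimited JSON for a BigQuery load job:

```python
import json
import urllib.parse
import urllib.request

# Illustrative assumptions: a verified GSC property and a valid OAuth2
# access token with the webmasters.readonly scope.
SITE_URL = "https://www.example.com/"
ENDPOINT = ("https://www.googleapis.com/webmasters/v3/sites/"
            + urllib.parse.quote(SITE_URL, safe="")
            + "/searchAnalytics/query")

def fetch_day(token, day):
    """Pull one day of query/page metrics from the GSC search analytics API."""
    body = json.dumps({
        "startDate": day,
        "endDate": day,
        "dimensions": ["query", "page"],
        "rowLimit": 5000,
    }).encode()
    req = urllib.request.Request(ENDPOINT, data=body, headers={
        "Authorization": "Bearer " + token,
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("rows", [])

def to_ndjson(api_rows, day):
    """Flatten the API's keyed rows into newline-delimited JSON, ready for
    a BigQuery load job (this table schema is an assumption)."""
    return "\n".join(
        json.dumps({
            "date": day,
            "query": r["keys"][0],
            "page": r["keys"][1],
            "clicks": r["clicks"],
            "impressions": r["impressions"],
            "ctr": r["ctr"],
            "position": r["position"],
        })
        for r in api_rows
    )
```

Run daily (for example, from an App Engine cron job), stage the NDJSON in Cloud Storage, and loop the same call over the previous 90 days once to backfill the tables.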




Reach out to Analytics Pros if you’re interested in deploying this solution. We can help!

2017-09-20T22:20:40+00:00 Gord Nuttall Analytics Pros Interactively exploring every NBA play since 2004 2017-09-20T23:03:24+00:00 DataTau Call It 'Fintech' and Watch the Valuation Soar

China’s ZhongAn insurance company wants to be seen as a tech play, rather than a boring old insurer.

2017-09-20T15:47:10+00:00 Markets Adobe's Cloud Hits First Headwind

Larger deals have been taking longer to close, but the software maker’s cloud transition still packs a lot of growth ahead.

2017-09-20T16:34:21+00:00 Markets What makes predicting customer churn a challenge? 2017-09-20T18:02:29+00:00 DataTau Datasets for Building a Data Analysis Portfolio 2017-09-20T18:02:29+00:00 DataTau How Clorox uses analytics to innovate after 105 years on top

A quick Silicon Valley success story: a few years back, five Bay Area venture capitalists invested $100 each in a consumer packaged goods startup called “The Electro-Alkaline Company” that they believed was set to disrupt the garment cleaning industry. Clunky name aside, they saw a product that would radically change people’s lives at home, extending the longevity of their clothes by years and rendering previously unwearable garments as good as new at an unbeatable price.

The product worked. It was an improvement on centuries-old technology, and savvy management helped grow the business and make it more affordable for consumers than it ever had been before. In 1928, after fifteen years of growth and a name change, Clorox went public and has been a leader in consumer packaged goods ever since. Those five investors most certainly got their $100 worth.

Point being: Clorox has been around a while. Over the years, Clorox has expanded its portfolio. It’s not just a bleach company—other assets include Hidden Valley Ranch, KC Masterpiece, Fresh Step, Burt’s Bees and others. Clorox has adapted to everything that has happened since its founding in 1913, and it’s not afraid of the digital transformation that it knows it needs to make—or rather, is making—to continue growing. Historically, Clorox’s products have been sold primarily in big box stores, and though that is still the case, the company has seen double-digit growth in its e-commerce business in each of the past two years—a trend that Web Analytics Group Manager Kesha Patel is responsible for maintaining. That’s why she–like those five initial investors in Clorox–bet big. Kesha, part of a cross-functional team of data scientists, developers, analysts and product experts, had the goal of increasing the engagement of their best customers on the website of one of their brands. The team saw success. Their efforts spurred a 30% increase in engagement, and now they have a powerful tool that can be deployed across their portfolio. Here’s what they built and how they did it.

Being on top isn’t good enough

When you’re already a market leader, innovation may not come as easily, but it can bring big rewards. “We don’t want to lose relevance,” Kesha said. “We can’t just rest on our laurels and be happy with being the market leader. Every aspect of our business, including the website, should serve a clear purpose.”

So here’s the question she needed to answer: what should a CPG company’s website, you know, do?

The good news: Clorox’s wide variety of brands and forward-looking leadership allowed for more experimentation than would be possible at other companies. The bad news: it was quickly apparent that the most obvious idea of what to do with these websites was not going to be a feasible solution.

“We don’t really get sales from our site. It’s not something we do, and if we approach the website as a sales portal, we’re setting ourselves up for failure,” Kesha said.

Instead, they would have to use the websites as places where users engaged with content related to the products and developed brand loyalty. To that end, they decided to run an experiment with the Hidden Valley Ranch site. The test: will a personalization engine increase retention and engagement on the website? And what would that look like?

Kesha knew there were two metrics that she really cared about. “We decided to focus on retention and engagement. Could we get consumers to visit one more page? Could we get them to return to the site again? Are we seeing them become registered members of our rewards program?”

Building the engine

The personalization engine they built was not a massive overhaul of the website, but a few simple adjustments. Instead of the same hard-coded recipe recommendations on specific pages going to everyone, viewers would now see recommendations based on what pages they had previously visited. The goal was modest and ambitious at the same time. It sounds simple enough, but first, they needed someone who could build an algorithm to power the engine.

Fortunately, Naveen Kolagatla, a Data Scientist, was more than capable. He knew where to start: “We needed to track and collect both registered and anonymous user data on our site.” It took Naveen four months to design an algorithm based on their data that would effectively tailor individual experiences and recommendations within the site.
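The article doesn't describe Naveen's algorithm itself, but the general shape of recommending content from a visitor's page history can be sketched like this (a minimal illustration with made-up fields and a simple tag-overlap score, not the team's actual model):

```python
from collections import Counter

def recommend(history, catalog, k=3):
    """Score each candidate recipe by how often the user visited pages
    sharing its tags, and return the top k recipes they haven't seen.
    (Hypothetical sketch: `url`/`tags` fields are illustrative only.)"""
    tag_counts = Counter(tag for page in history for tag in page["tags"])
    seen = {page["url"] for page in history}

    def score(recipe):
        return sum(tag_counts[tag] for tag in recipe["tags"])

    unseen = [r for r in catalog if r["url"] not in seen]
    return sorted(unseen, key=score, reverse=True)[:k]
```

A real engine would fold in registered-user data and far richer signals, but the core idea is the same: replace one hard-coded recommendation with a ranking conditioned on what each visitor has already viewed.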

The process required Naveen to not just take in data, but to interpret it with the kind of complex analytics that serve as the backbone of any personalization engine. “We built statistical models to dig deep, and saw the evidence suggesting that personalized recommendations would delight our customers.”

And Naveen couldn’t do it alone. “We realized personalization was a team sport,” Naveen told us. “We had tagging/analytics, developers, creative development, copywriters and brand folks working together to make this happen.”

Kesha agreed: “If we were going to do this project, it was imperative that we track the results to make sure we were spending time and money wisely.”

Personalization pays off

“We ran an A/B test at the beginning of this year,” Naveen said. “Our results were positive. Consumers who received the personalized experience viewed and engaged with recipes 3x more than consumers who received the static experience. A more surprising aspect of this test was that consumers were organically exploring a wider variety of recipes after being exposed to the personalized test.”

After launching with a personalization engine that only targeted return users, the lift was undeniable. Think of the person you know who loves ranch dressing the most. Imagine them slathering it all over all of their favorite foods: pizza, salads, burgers, everything. It’s fair to say that people making repeated visits to a ranch dressing website are these kinds of people. These are the people Hidden Valley Ranch needs to reach–the one percent of the one percent of ranch dressing consumers. And Hidden Valley Ranch’s personalization engine was able to hold their attention and keep them actively engaged on the site for longer. Time and time again, they found the suggestions from the personalization engine to be delightful.

What’s next?

“The lessons of the Hidden Valley Ranch personalization engine experiment transcend the brand,” Kesha said. “If we can serve content that consumers are interested in, they will engage with it. And there’s a clear line from increased engagement to increased sales.”

But it takes careful preparation and work. Kesha was clear about one thing: if you want to have a personalization engine, you need the data upon which you base your underlying assumptions to be good.

“I think so many companies struggle with data integrity–trusting that what they are looking at is accurate. Because if that foundational data is not accurate, then what is a team supposed to do? Make sure you do things right the first time, otherwise you have to reimplement and instrument everything differently.”

Naveen agreed: “Follow the data! Start with small tests and scale big after you see the value.”

Kesha was excited about the potential for the personalization engine. “There is no shortage of brands that Clorox can use it on. Burt’s Bees, Kingsford, Soy Vay, Formula 409 and Fresh Step all represent exciting opportunities. We’re looking at it now, but it’s something you’re going to see rolled out across our brands in the future.”

It’s a future the Electro-Alkaline Company never could’ve imagined.

The post How Clorox uses analytics to innovate after 105 years on top appeared first on Mixpanel.

2017-09-19T07:30:01+00:00 Jordan Carr Mixpanel - Analytics for startups LEGO color scheme classifications

Nathanael Aff poked at LEGO Brickset data with some text-mining methods in search of recurring color schemes in LEGO sets. This is what he got.


2017-09-19T07:54:10+00:00 Nathan Yau FlowingData Configuring Google Analytics in Google Tag Manager

Google Tag Manager (GTM) allows you to set up Google Analytics, from tracking simple pageviews to custom events, without having to add additional code to your site.

In this article, we’ll walk through the process of configuring Google Analytics in GTM.

Deploying the Global Google Analytics Code

To start, you’ll need to have created a GTM account and placed the container code on your site.

First, let’s cover getting the basic Google Analytics tracking code in place across your site. Create a new tag and click within the “Tag Configuration” box to choose a tag type. Select Universal Analytics.

Leave “Track Type” as “Pageview” and select “New Variable” under the Google Analytics Settings dropdown.

A new box will appear where you can insert your Google Analytics ID, which you can find in the Admin section of your Google Analytics account (Property Settings > Tracking Info > Tracking Code). Copy and paste this ID number into the “Tracking ID” box. Save and note that, in the future, you can simply select this variable instead of entering the ID each time you create a new tag.

Next, you’ll need to define the trigger, which determines where the tag containing the Google Analytics code will fire on your site. In most cases, you’ll want the code to be deployed across your entire site, so you can select All Pages. Note that you can set up a custom trigger if you only want the code to fire on certain pages or exclude pages.

Save your tag and click Submit to deploy it live.

You can use the Tag Assistant extension to verify that the code is firing properly on your site.

Tracking Custom Events

Custom events in Google Analytics allow you to record activities such as clicks, video views, or form submissions that may not be tracked by default. GTM allows you to fire events into your Analytics account based on the triggers you define. While there are many possible uses, we’ll cover a couple of common ones here.

Form Tracking

If a form has a “thank you” page, it’s simple to track a goal in Google Analytics by inserting the URL. However, many sites contain forms that submit within the same page without the URL changing. In this case, you’ll need to fire an event for the form submission instead of defining a “thank you” URL.

Thankfully, GTM’s form trigger provides a handy workaround. To get started, create a tag and select Universal Analytics. Next, choose a Track Type of Event. Now, define the event parameters; in this case, we’re using the following:

  • Category: form
  • Action: submit
  • Label: contact

Note that you’ll need to create a goal in Google Analytics utilizing the same parameters you use for the event in GTM. Next, create your trigger and choose Form Submission.

In the box that appears, configure the form trigger details. The “Wait for Tags” checkbox tells GTM to wait for the necessary tags to fire before the form submits, so that all the proper pieces are in place for tracking. The “Check Validation” box, if checked, only allows the tag to fire if the form is successfully submitted.

The next field defines where the trigger should be actively listening for form submissions. In this case, we’re tracking a form on a “Contact” page and define the URL accordingly. Finally, the “All Forms/Some Forms” options allow you to define a specific form to track if there are multiple.

Save your tag and test to ensure that GTM does indeed detect the submissions to fire the tag. Use the Preview feature in GTM for testing this configuration.

Note that some types of forms (such as those built using Ajax) aren’t detected by the GTM form trigger and require additional workarounds that entail development support. For a more advanced discussion, see Simo Ahava’s article on Tracking Form Engagement with Google Tag Manager.
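One common workaround in that situation is to push a custom event into the dataLayer from the form's own success handler, and fire the Analytics event tag from a GTM Custom Event trigger matching that event name instead of the built-in Form Submission trigger. A sketch (the callback name and field values here are hypothetical examples):

```javascript
// In a browser this is window.dataLayer, the global array GTM reads from.
var dataLayer = dataLayer || [];

// Hypothetical Ajax success callback for a contact form.
function onContactFormSuccess() {
  dataLayer.push({
    event: 'formSubmit',   // match this name in a GTM Custom Event trigger
    formCategory: 'form',  // map these to the event's Category/Action/Label
    formAction: 'submit',
    formLabel: 'contact'
  });
}
```

In GTM you would then create Data Layer Variables for formCategory, formAction, and formLabel and reference them in the event tag's fields.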

Click Tracking

By default, Google Analytics doesn’t track clicks that don’t lead to a different URL, take a user to a different domain, or open a document such as a PDF. GTM can fire events for tracking a wide range of click activity.

As an example, let’s cover firing an event for PDF clicks. Create a Universal Analytics tag and select Event as the tag type. Define the parameters for the event:

  • Category: PDF
  • Action: Click
  • Label: {{Click URL}} (a variable that auto-populates with the URL of the clicked document)

Next, let’s set up the trigger, choosing “Click – Just Links” as the trigger type. Under “Enable this trigger when,” select Page URL > Matches Regex and insert .* into the text field (matching all pages).

Under “This trigger fires on,” select “Some Link Clicks.” Choose Click URL > Ends with and insert .pdf into the text field. Finally, save the trigger and test it to ensure you’re indeed seeing events show up in Google Analytics for PDF clicks.
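Expressed as a plain predicate, the two trigger conditions combine like this (an illustration of the logic only, not GTM's internal implementation):

```javascript
// Mirrors the "Page URL matches RegEx .*" and "Click URL ends with .pdf"
// conditions configured above: both must hold for the tag to fire.
function firesPdfClickEvent(pageUrl, clickUrl) {
  var pageMatches = /.*/.test(pageUrl);      // all pages
  var linkMatches = /\.pdf$/.test(clickUrl); // link ends with .pdf
  return pageMatches && linkMatches;
}
```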

We’ve covered the basics of getting Google Analytics in place via GTM, but countless possible custom uses exist for your tracking needs! For more reading, see Google’s official Tag Manager help documentation.

We’d love to hear how you utilize Google Tag Manager in the comments!

2017-09-19T12:00:36+00:00 Tim Jensen The Clix Marketing Blog Pirating Web Content Responsibly With R

(This article was first published on R –, and kindly contributed to R-bloggers)

International Talk Like A Pirate Day almost slipped by without me noticing (September has been a crazy busy month), but it popped up in the calendar notifications today and I was glad that I had prepped the meat of a post a few weeks back.

There will be no ‘rrrrrr’ abuse in this post, I’m afraid, but there will be plenty of R code.

We’re going to combine pirate day with “pirating” data, in the sense that I’m going to show one way to use the web scraping powers of R responsibly to collect data on and explore modern-day pirate encounters.

Scouring The Seas Web For Pirate Data

Interestingly enough, there are many sources of pirate data. I’ve blogged about a few in the past, but I came across a new (to me) one by the International Chamber of Commerce. Their Commercial Crime Services division has something called the Live Piracy & Armed Robbery Report:

(site png snapshot taken with splashr)

I fiddled a bit with the URL and — sure enough — if you work a bit you can get data going back to late 2013, all in the same general format, so I jotted down base URLs and start+end record values and filed them away for future use:

library(jwatr) # github/hrbrmstr/jwatr

# report_urls: a data frame of base URLs plus start/end record values
# (its construction was elided from this extract)
report_urls %>%
  pull(url_list) %>%
  flatten_chr() -> target_urls

## [1] ""
## [2] ""
## [3] ""
## [4] ""
## [5] ""
## [6] ""

Time to pillage some details!

But…Can We Really Do It?

I poked around the site’s terms of service/terms and conditions and automated retrieval was not discouraged. Yet, those aren’t the only sea mines we have to look out for. Perhaps they use their robots.txt to stop pirates. Let’s take a look:

## # If the Joomla site is installed within a folder such as at
## # e.g. the robots.txt file MUST be
## # moved to the site root at e.g.
## # AND the joomla folder name MUST be prefixed to the disallowed
## # path, e.g. the Disallow rule for the /administrator/ folder
## # MUST be changed to read Disallow: /joomla/administrator/
## #
## # For more information about the robots.txt standard, see:
## #
## #
## # For syntax checking, see:
## #
## User-agent: *
## Disallow: /administrator/
## Disallow: /cache/
## Disallow: /cli/
## Disallow: /components/
## Disallow: /images/
## Disallow: /includes/
## Disallow: /installation/
## Disallow: /language/
## Disallow: /libraries/
## Disallow: /logs/
## Disallow: /media/
## Disallow: /modules/
## Disallow: /plugins/
## Disallow: /templates/
## Disallow: /tmp/

Ahoy! We’ve got a license to pillage!

But, we don’t have a license to abuse their site.

While I still haven’t had time to follow up on an earlier post about ‘crawl-delay’ settings across the internet, I have done enough work on it to know that a 5- or 10-second delay is the most common setting (when sites bother to have this directive in their robots.txt file). ICC’s site does not have this setting defined, but we’ll still pirate (er, crawl) responsibly and use a 5-second delay between requests:

s_GET <- purrr::safely(httr::GET)  # wrap GET so retrieval errors are captured, not fatal

map(target_urls, ~{
  Sys.sleep(5)  # the responsible 5-second delay between requests
  s_GET(.x)
}) -> httr_raw_responses

write_rds(httr_raw_responses, "data/2017-icc-ccs-raw-httr-responses.rds")


There are more “safety” measures you can use with httr::GET() but this one is usually sufficient. It just prevents the iteration from dying when there are hard retrieval errors.

I also like to save off the crawl results so I can go back to the raw file (if needed) vs re-scrape the site (this crawl takes a while). I do it two ways here, first using raw httr response objects (including any “broken” ones) and then filtering out the “complete” responses and saving them in WARC format so it’s in a more common format for sharing with others who may not use R.

Digging For Treasure

Did I mention that while the site looks like it’s easy to scrape it’s really not easy to scrape? That nice looking table is a sea mirage ready to trap unwary sailors (er, crawlers) in a pit of despair. The UX is built dynamically from on-page javascript content, a portion of which is below:

Now, you’re likely thinking: “Don’t we need to re-scrape the site with seleniumPipes or splashr?”

Fear not, stout yeoman! We can do this with the content we have if we don’t mind swabbing the decks first. Let’s put the map code up first and then dig into the details:

# make field names great again: normalize the ugly display labels into
# clean snake_case column names (body reconstructed; the original was
# elided from this extract)
mfga <- function(x) {
  x %>%
    stri_replace_all_regex("[^[:alnum:] ]", "") %>%
    stri_trans_tolower() %>%
    stri_replace_all_fixed(" ", "_")
}

# column types for the final type_convert() pass; the full specification
# was elided, but `date` needs to parse as a date for the later filtering
pirate_cols <- cols(
  date = col_date(format = ""),
  .default = col_character()
)

# keep only the responses that completed without a hard retrieval error
good_responses <- keep(httr_raw_responses, ~!is.null(.x$result))

# iterate over the good responses with a progress bar
pb <- progress_estimated(length(good_responses))
map_df(good_responses, ~{

  pb$tick()$print()
  doc <- read_html(content(.x$result, as = "text", encoding = "UTF-8"))
  ctx <- v8()

  # find the <script> tag that has our data, carve out the target lines,
  # do some data massaging and evaluate the javascript with V8
  html_nodes(doc, xpath=".//script[contains(., 'requirejs')]") %>%
    html_text() %>%
    stri_split_lines() %>%
    .[[1]] %>%
    grep("narrations_ro", ., value=TRUE) %>%
    sprintf("var dat = %s;", .) %>%
    ctx$eval()  # ctx: a V8 context created when processing each response

  # pull the evaluated object back into R and tidy each record's
  # field/value pairs (reconstructed around lines elided from this extract)
  p <- ctx$get("dat")

  map_df(p, ~{
    data_frame(
      field = mfga(.x[[3]]$label),
      value = .x[[3]]$value
    )
  }) %>%
    filter(value != "") %>%
    distinct(field, .keep_all = TRUE) %>%
    spread(field, value)

}) %>%
  type_convert(col_types = pirate_cols) %>%
  filter(stri_detect_regex(attack_number, "^[[:digit:]]")) %>%
  filter(lubridate::year(date) > 2012) %>%
  mutate(
    attack_posn_map = stri_replace_last_regex(attack_posn_map, ":.*$", ""),
    attack_posn_map = stri_replace_all_regex(attack_posn_map, "[\\(\\) ]", "")
  ) %>%
  separate(attack_posn_map, sep=",", into=c("lat", "lng")) %>%
  mutate(lng = as.numeric(lng), lat = as.numeric(lat)) -> pirate_df

write_rds(pirate_df, "data/pirate_df.rds")

The first bit there is a function to “make field names great again”. We’re processing some ugly list data and it’s not all uniform across all years so this will help make the data wrangling idiom more generic.

Next, I set up a cols object because we’re going to be extracting data from text as text, and I think it’s cleaner to type_convert at the end vs. having a slew of as.numeric() (et al.) statements in-code (for small munging). You’ll note at the end of the munging pipeline I still need to do some manual conversions.

Now we can iterate over the good (complete) responses.

The purrr::safely function shoves the real httr response into result, so we focus on that, then “surgically” extract the target data, evaluate it with the javascript engine, and retrieve the data from said evaluation.

Because ICC used the same Joomla plugin over the years, the data is uniform, but also can contain additional fields, so we extract the fields in a generic manner. During the course of data wrangling, I noticed there were often multiple Date: fields, so we throw in some logic to help avoid duplicate field names as well.

That whole process goes really quickly, but why not save off the clean data at the end for good measure?

Gotta Have A Pirate Map

Now we can begin to explore the data. I’ll leave most of that to you (since I’m providing the scraped data on GitHub), but here are a few views. First, just some simple counts per month:

mutate(pirate_df, year = lubridate::year(date), year_mon = as.Date(format(date, "%Y-%m-01"))) %>%
  count(year_mon) %>%
  ggplot(aes(year_mon, n)) +
  geom_segment(aes(xend=year_mon, yend=0)) +
  scale_y_comma() +
  labs(x=NULL, y=NULL,
       title="(Confirmed) Piracy Incidents per Month",
       caption="Source: International Chamber of Commerce Commercial Crime Services") +
  theme_ipsum_rc(grid="Y")  # (theme call assumed; the original line was elided)

And, finally, a map showing pirate encounters but colored by year:

world <- map_data("world")  # base polygons for the map layer (assumed; elided in extract)

mutate(pirate_df, year = lubridate::year(date)) %>%
  arrange(year) %>%
  mutate(year = factor(year)) -> plot_df

ggplot() +
  geom_map(data = world, map = world, aes(x=long, y=lat, map_id=region), fill="#b2b2b2") +
  geom_point(data = plot_df, aes(lng, lat, color=year), size=2, alpha=1/3) +
  ggalt::coord_proj("+proj=wintri") +
  viridis::scale_color_viridis(name=NULL, discrete=TRUE) +
  labs(x=NULL, y=NULL,
       title="Piracy Incidents per Month (Confirmed)",
       caption="Source: International Chamber of Commerce Commercial Crime Services ") +
  theme_ipsum_rc(grid="XY") +
  theme(legend.position = "bottom")

Taking Up The Mantle of the Dread Pirate Hrbrmstr

Hopefully this post shed some light on scraping responsibly and using different techniques to get to hidden data in web pages.

There’s some free-form text and more than a few other ways to look at the data. You can find the code and data on Github and don’t hesitate to ask questions in the comments or file an issue. If you make something blog it! Share your ideas and creations with the rest of the R (or other language) communities!

2017-09-19T12:28:03+00:00 hrbrmstr R-bloggers So You Want to Build a Chat Bot – Here's How (Complete with Code!)

Posted by R0bin_L0rd

You’re busy and (depending on effective keyword targeting) you’ve come here looking for something to shave months off the process of learning to produce your own chat bot. If you’re convinced you need this and just want the how-to, skip to "What my bot does." If you want the background on why you should be building for platforms like Google Home, Alexa, and Facebook Messenger, read on.

Why should I read this?

Do you remember when it wasn't necessary to have a website? When most boards would scoff at the value of running a Facebook page? Now Gartner is telling us that customers will manage 85% of their relationship with brands without interacting with a human by 2020 and publications like Forbes are saying that chat bots are the cause.

The situation now is the same as every time a new platform develops: if you don’t have something your customers can access, you're giving that medium to your competition. At the moment, an automated presence on Google Home or Slack may not be central to your strategy, but those who claim ground now could dominate it in the future.

The problem is time. Sure, it'd be ideal to be everywhere all the time, to have your brand active on every platform. But it would also be ideal to catch at least four hours of sleep a night or stop covering our keyboards with three-day-old chili con carne as we eat a hasty lunch in between building two of the Next Big Things. This is where you’re fortunate in two ways:

  1. When we develop chat applications, we don’t have to worry about things like a beautiful user interface because it’s all speech or text. That's not to say you don't need to worry about user experience, as there are rules (and an art) to designing a good conversational back-and-forth. Amazon is actually offering some hefty prizes for outstanding examples.
  2. I’ve spent the last six months working through the steps from complete ignorance to creating a distributable chat bot, and I’m giving you all my workings. In this post I break down each of the levels of complexity, from no-code back-and-forth to managing user credentials and sessions that stretch over days or months. I’m also including full code that you can adapt and pull apart as needed. I’ve commented each portion of the code explaining what it does and linking to resources where necessary.

I've written more about the value of Interactive Personal Assistants on the Distilled blog, so this post won't spend any longer focusing on why you should develop chat bots. Instead, I'll share everything I've learned.

What my built-from-scratch bot does

Ever since I started investigating chat bots, I was particularly interested in finding out the answer to one question: What does it take for someone with little-to-no programming experience to create one of these chat applications from scratch? Fortunately, I have direct access to someone with little-to-no experience (before February, I had no idea what Python was). And so I set about designing my own bot with the following hard conditions:

  1. It had to have some kind of real-world application. It didn't have to be critical to a business, but it did have to bear basic user needs in mind.
  2. It had to be easily distributable across the immediate intended users, and to have reasonable scope to distribute further (modifications at most, rather than a complete rewrite).
  3. It had to be flexible enough that you, the reader, can take some free code and make your own chat bot.
  4. It had to be possible to adapt the skeleton of the process for much more complex business cases.
  5. It had to be free to run, but could have the option of paying to scale up or make life easier.
  6. It had to send messages confirming when important steps had been completed.

The resulting program is "Vietnambot," a program that communicates with Slack, the API.AI linguistic processing platform, and Google Sheets, using real-time and asynchronous processing and its own database for storing user credentials.

If that meant nothing to you, don't worry — I'll define those things in a bit, and the code I'm providing is obsessively commented with explanation. The thing to remember is it does all of this to write down food orders for our favorite Vietnamese restaurant in a shared Google Sheet, probably saving tens of seconds of Distilled company time every year.

It's deliberately mundane, but it's designed to be a template for far more complex interactions. The idea is that whether you want to write a no-code-needed back-and-forth just through API.AI; a simple Python program that receives information, does a thing, and sends a response; or something that breaks out of the limitations of linguistic processing platforms to perform complex interactions in user sessions that can last days, this post should give you some of the puzzle pieces and point you to others.

What is API.AI and what's it used for?

API.AI is a linguistic processing interface. It can receive text, or speech converted to text, and perform much of the comprehension for you. You can see my Distilled post for more details, but essentially, it takes the phrase “My name is Robin and I want noodles today” and splits it up into components like:

  • Intent: food_request
  • Action: process_food
  • Name: Robin
  • Food: noodles
  • Time: today

This setup means you have some hope of responding to the hundreds of thousands of ways your users could find to say the same thing. It’s your choice whether API.AI receives a message and responds to the user right away, or whether it receives a message from a user, categorizes it and sends it to your application, then waits for your application to respond before sending your application’s response back to the user who made the original request. In its simplest form, the platform has a bunch of one-click integrations and requires absolutely no code.

I’ve listed the possible levels of complexity below, but it’s worth bearing some hard limitations in mind which apply to most of these services. They cannot remember anything outside of a user session, which will automatically end after about 30 minutes; they have to do everything through what are called POST and GET requests (something you can ignore unless you’re using code); and if you do choose to have it ask your application for information before it responds to the user, you have to do everything and respond within five seconds.
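For the code route, that round trip boils down to parsing API.AI's POST body and returning JSON before the five-second window closes. A minimal sketch (the result/action/parameters and speech/displayText field names follow API.AI's v1 webhook format; the process_food fulfillment logic is a hypothetical stand-in):

```python
import json

def handle_webhook(request_body: str) -> str:
    """Build a fulfillment response for an API.AI v1 webhook POST."""
    req = json.loads(request_body)
    result = req.get("result", {})
    params = result.get("parameters", {})

    if result.get("action") == "process_food":
        # hypothetical fulfillment: acknowledge the parsed food order
        speech = "Okay {}, one order of {} for {}.".format(
            params.get("name", "there"),
            params.get("food", "something"),
            params.get("time", "today"),
        )
    else:
        speech = "Sorry, I didn't catch that."

    # API.AI reads `speech` (voice) and `displayText` (text) from the reply
    return json.dumps({"speech": speech, "displayText": speech})
```

In production this function would sit behind a POST route (e.g. a small Flask app on Heroku) and has to return before API.AI's five-second timeout.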

What are the other things?

Slack: A text-based messaging platform designed for work (or for distracting people from work).

Google Sheets: We all know this, but just in case, it’s Excel online.

Asynchronous processing: Most of the time, one program can do one thing at a time. Even if it asks another program to do something, it normally just stops and waits for the response. Asynchronous processing is how we ask a question and continue without waiting for the answer, possibly retrieving that answer at a later time.
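In Python (the language Vietnambot is written in), the idea can be sketched with asyncio; the function names and the order-writing step here are illustrative, not from the actual bot:

```python
import asyncio

async def write_order_to_sheet(order: str) -> str:
    # stand-in for a slow external call, e.g. a Google Sheets API write
    await asyncio.sleep(0.1)
    return "wrote '{}' to the sheet".format(order)

async def handle_message(order: str):
    # start the slow work, answer the user immediately,
    # then collect the deferred result later
    task = asyncio.create_task(write_order_to_sheet(order))
    immediate_reply = "Got it! Writing your order down."
    deferred_result = await task  # retrieve the answer when it's ready
    return immediate_reply, deferred_result

reply, result = asyncio.run(handle_message("pho"))
```

The same pattern is what lets a bot acknowledge a user within a platform's response window while the real work finishes in the background.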

Database: Again, it’s likely you know this, but if not: it’s Excel that our code will use (different from the Google Sheet).

Heroku: A platform for running code online. (Important to note: I don’t work for Heroku and haven’t been paid by them. I couldn’t say that it's the best platform, but it can be free and, as of now, it’s the one I’m most familiar with).

How easy is it?

This graph isn't terribly scientific and it's from the perspective of someone who's learning much of this for the first time, so here’s an approximate breakdown of each level of complexity and the time it took me:

  • You set up the conversation purely through API.AI or similar, no external code needed. For instance, answering set questions about contact details or opening times.
    Time it took me: half an hour to a distributable prototype.

  • A program that receives information from API.AI and uses that information to update the correct cells in a Google Sheet (but can’t remember user names and can’t use the slower Google Sheets integrations).
    Time it took me: a few weeks to a distributable prototype.

  • A program that remembers user names once they've been set and writes them to Google Sheets. It’s limited to five seconds of processing time by API.AI, so it can’t use the slower Google Sheets integrations and may not work reliably when the app has to boot up from sleep, because that takes a few seconds of your allocation.*
    Time it took me: a few weeks on top of the last prototype.

  • A program that remembers user details and manages the connection between API.AI and our chosen platform (in this case, Slack) so it can break out of the five-second processing window.
    Time it took me: a few weeks more on top of the last prototype (not including the time needed to rewrite existing structures to work with this).

*On the Heroku free plan, when your app hasn’t been used for 30 minutes it goes to sleep. This means that the first time it’s activated it takes a little while to start your process, which can be a problem if you have a short window in which to act. You could get around this by (mis)using a free “uptime monitoring service” which sends a request every so often to keep your app awake. If you choose this method, in order to avoid using all of the Heroku free hours allocation by the end of the month, you’ll need to register your card (no charge, it just gets you extra hours) and only run this application on the account. Alternatively, there are any number of companies happy to take your money to keep your app alive.

For the rest of this post, I’m going to break down each of those key steps and either give an overview of how you could achieve it, or point you in the direction of where you can find that. The code I’m giving you is Python, but as long as you can receive and respond to GET and POST requests, you can do it in pretty much whatever format you wish.

1. Design your conversation

Conversational flow is an art form in itself. Jonathan Seal, strategy director at Mando and member of British Interactive Media Association's AI thinktank, has given some great talks on the topic. Paul Pangaro has also spoken about conversation as more than interface in multiple mediums.

Your first step is to create a flow chart of the conversation. Write out your ideal conversation, then write out the most likely ways a person might go off track and how you’d deal with them. Then go online, find existing chat bots and do everything you can to break them. Write out the most difficult, obtuse, and nonsensical responses you can. Interact with them like you’re six glasses of wine in and trying to order a lemon engraving kit, interact with them as though you’ve found charges on your card for a lemon engraver you definitely didn’t buy and you are livid, interact with them like you’re a bored teenager. At every point, write down what you tried to do to break them and what the response was, then apply that to your flow. Then get someone else to try to break your flow. Give them no information whatsoever apart from the responses you’ve written down (not even what the bot is designed for), refuse to answer any input you don’t have written down, and see how it goes. David Low, principal evangelist for Amazon Alexa, often describes the value of printing out a script and testing the back-and-forth for a conversation. As well as helping to avoid gaps, it’ll also show you where you’re dumping a huge amount of information on the user.

While “best practices” are still developing for chat bots, a common theme is that it’s not a good idea to pretend your bot is a person. Be upfront that it’s a bot — users will find out anyway. Likewise, it’s incredibly frustrating to open a chat and have no idea what to say. On text platforms, start with a welcome message making it clear you’re a bot and giving examples of things you can do. On platforms like Google Home and Amazon Alexa users will expect a program, but the “things I can do” bit is still important enough that your bot won’t be approved without this opening phase.

I've included a sample conversational flow for Vietnambot at the end of this post as one way to approach it, although if you have ideas for alternative conversational structures I’d be interested in reading them in the comments.

A final piece of advice on conversations: The trick here is to find organic ways of controlling the possible inputs and preparing for unexpected inputs. That being said, the Alexa evangelist team provide an example of terrible user experience in which a bank’s app said: “If you want to continue, say nine.” Quite often questions, rather than instructions, are the key.

2. Create a conversation in API.AI

API.AI has quite a lot of documentation explaining how to create programs here, so I won’t go over individual steps.

Key things to understand:

You create agents; each is basically a different program. Agents recognize intents, which are simply ways of triggering a specific response. If someone says the right things at the right time, they meet criteria you have set, fall into an intent, and get a pre-set response.

The right things to say are included in the “User says” section (screenshot below). You set either exact phrases or lists of options as the necessary input. For instance, a user could write “Of course, I’m [any name]” or “Of course, I’m [any temperature].” You could set up one intent for name-is which matches “Of course, I’m [given-name]” and another intent for temperature which matches “Of course, I’m [temperature],” and depending on whether your user writes a name or temperature in that final block you could activate either the “name-is” or “temperature-is” intent.
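To make that "User says" routing concrete, here's a toy Python stand-in I wrote for illustration. API.AI does this with trained entity types (things like @sys.given-name vs. @sys.temperature); a crude "is it a number?" check stands in for that here, and the function name is mine, not API.AI's:

```python
import re

# Toy stand-in for API.AI's "User says" matching: the same phrase pattern
# routes to one of two intents depending on what fills the final slot.
def match_intent(user_says):
    m = re.match(r"Of course, I'?m (.+)", user_says, re.IGNORECASE)
    if not m:
        return None  # nothing matched; a fallback intent would fire here
    value = m.group(1)
    # API.AI decides via entity types (@sys.given-name vs @sys.temperature);
    # a crude numeric check stands in for that.
    if value.rstrip(" degrees°CF").isdigit():
        return ("temperature-is", value)
    return ("name-is", value)

print(match_intent("Of course, I'm Robin"))       # ('name-is', 'Robin')
print(match_intent("Of course, I'm 37 degrees"))  # ('temperature-is', '37 degrees')
```

The point of the sketch is only that one phrase template can feed two different intents; in API.AI itself you'd never write this code, you'd just attach different entity types to the slot.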

The “right time” is defined by contexts. Contexts help define whether an intent will be activated, but are also created by certain intents. I’ve included a screenshot below of an example interaction. In this example, the user says that they would like to go on holiday. This activates a holiday intent and sets the holiday context you can see in input contexts below. After that, our service will have automatically responded with the question “Where would you like to go?” When our user says “The” and then any location, it activates our holiday location intent because it matches both the context and what the user says. If, on the other hand, the user had initially said “I want to go to the theater,” that might have activated the theater intent, which would set a theater context — so when we ask “What area of theaters are you interested in?” and the user says “The [location]” or even just “[location],” we will take them down a completely different path of suggesting theaters rather than hotels in Rome.
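The holiday/theater example above can be sketched in a few lines. This is my own toy mock of context gating, not API.AI code, but it shows why the bare word “Rome” lands in different intents depending on the context a previous intent set:

```python
# Toy mock of API.AI contexts: an intent only fires if its input context is
# active, and firing an intent can set the context for the next message.
active_contexts = set()

def handle(message):
    text = message.lower()
    if "holiday" in text:
        active_contexts.add("holiday")        # holiday intent sets a context
        return "Where would you like to go?"
    if "theater" in text:
        active_contexts.add("theater")        # theater intent sets a context
        return "What area of theaters are you interested in?"
    location = text.replace("the ", "").strip().title()
    if "holiday" in active_contexts:          # holiday-location intent
        return f"Here are hotels in {location}."
    if "theater" in active_contexts:          # theater-location intent
        return f"Here are theaters in {location}."
    return "Sorry, I didn't catch that."      # fallback intent

print(handle("I want to go on holiday"))  # Where would you like to go?
print(handle("Rome"))                     # Here are hotels in Rome.
```

In API.AI you build exactly this gating through the web interface, without writing any code.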

The way you can create conversations without ever using external code is by using these contexts. A user might say “What times are you open?”; you could set an open-time-inquiry context. In your response, you could give the times and ask if they want the phone number to contact you. You would then make a yes/no intent which matches the context you have set, so if your user says “Yes” you respond with the number. This could be set up within an hour but gets exponentially more complex when you need to respond to specific parts of the message. For instance, if you have different shop locations and want to give the right phone number without having to write out every possible location they could say in API.AI, you’ll need to integrate with external code (see section three).

Now, there will be times when your users don’t say what you're expecting. Excluding contexts, there are three very important ways to deal with that:

  1. Almost like keyword research — plan out as many possible variations of saying the same thing as possible, and put them all into the intent
  2. Test, test, test, test, test, test, test, test, test, test, test, test, test, test, test (when launched, every chat bot will have problems. Keep testing, keep updating, keep improving.)
  3. Fallback contexts

Fallback contexts don’t have a user says section, but can be boxed in by contexts. They match anything that has the right context but doesn’t match any of your user says. It could be tempting to use fallback intents as a catch-all. Reasoning along the lines of “This is the only thing they’ll say, so we’ll just treat it the same” is understandable, but it opens up a massive hole in the process. Fallback intents are designed to be a conversational safety net. They operate exactly the same as in a normal conversation. If a person asked what you want in your tea and you responded “I don’t want tea” and that person made a cup of tea, wrote the words “I don’t want tea” on a piece of paper, and put it in, that is not a person you’d want to interact with again. If we are using fallback intents to do anything, we need to preface it with a check. If we had to resort to it in the example above, saying “I think you asked me to add I don’t want tea to your tea. Is that right?” is clunky and robotic, but it’s a big step forward, and you can travel the rest of the way by perfecting other parts of your conversation.

3. Integrating with external code

I used Heroku to build my app. Using this excellent weather webhook example you can actually deploy a bot to Heroku within minutes. I found this example particularly useful as something I could pick apart to make my own call-and-response program. The weather webhook takes the information and calls Yahoo’s weather API, but ignoring that specific functionality, you essentially need the following if you’re working in Python:

    # Parse the JSON request that API.AI sends us
    req = request.get_json(silent=True, force=True)
    print(json.dumps(req, indent=4))

    # Process the request and decide what the response should be. You’ll need
    # to write a function called processRequest and make it return a dictionary
    # like the one below (the weather webhook example above is a good model).
    res = processRequest(req)
    # {
    #     "speech": "speech we want to send back",
    #     "displayText": "display text we want to send back, usually matches speech",
    #     "source": "your app name"
    # }

    # Make our response readable by API.AI and send it back to the service
    res = json.dumps(res, indent=4)
    response = make_response(res)
    response.headers['Content-Type'] = 'application/json'
    return response

As long as you can receive and respond to requests like that (or in the equivalent for languages other than Python), your app and API.AI should both understand each other perfectly — what you do in the interim to change the world or make your response is entirely up to you. The main code I have included is a little different from this because it's also designed to be the step in-between Slack and API.AI. However, I have heavily commented sections like process_food and the database interaction processes, with both explanation and reading sources. Those comments should help you make it your own. If you want to repurpose my program to work within that five-second window, I would forget about the file called and aim to copy whole processes from, paste them into a program based on the weatherhook example above, and go from there.

Initially I'd recommend trying GSpread to make some changes to a test spreadsheet. That way you’ll get visible feedback on how well your application is running (you’ll need to go through the authorization steps as they are explained here).

4. Using a database

Databases are pretty easy to set up in Heroku. I chose the Postgres add-on (you just need to authenticate your account with a card; it won’t charge you anything and then you just click to install). In the import section of my code I’ve included links to useful resources which helped me figure out how to get the database up and running — for example, this blog post.

I used the Python library Psycopg2 to interact with the database. To steal some examples of using it in code, have a look at the section entitled “synchronous functions” in either the or files. Open_db_connection and close_db_connection do exactly what they say on the tin (open and close the connection with the database). You tell check_database to check a specific column for a specific user and it gives you the value, while update_columns adds a value to specified columns for a certain user record. Where things haven’t worked straightaway, I’ve included links to the pages where I found my solution. One thing to bear in mind is that I’ve used a way of including columns as a variable, which Psycopg2 recommends quite strongly against. I’ve gotten away with it so far because I'm always writing out the specific column names elsewhere — I’m just using that method as a short cut.
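To make the shape of those functions concrete, here's a minimal sketch of the same check/update pattern, using the stdlib sqlite3 module in place of Psycopg2 and Postgres so it runs anywhere. The table and column names are illustrative, not the ones in my code. It also shows the safer alternative to my column-as-a-variable shortcut: whitelist the column names, since they can't be sent as query parameters the way values can.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT PRIMARY KEY, name TEXT, food TEXT)")
conn.execute("INSERT INTO users VALUES ('U123', NULL, NULL)")

# Column names can't be parameterized like values, so whitelist them rather
# than dropping arbitrary strings into the SQL.
ALLOWED_COLUMNS = {"name", "food"}

def check_database(column, user_id):
    """Return the value of one column for one user (or None)."""
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"unknown column: {column}")
    row = conn.execute(f"SELECT {column} FROM users WHERE user_id = ?",
                       (user_id,)).fetchone()
    return row[0] if row else None

def update_columns(column, value, user_id):
    """Write a value into one column for one user record."""
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"unknown column: {column}")
    conn.execute(f"UPDATE users SET {column} = ? WHERE user_id = ?",
                 (value, user_id))
    conn.commit()

update_columns("name", "Robin", "U123")
print(check_database("name", "U123"))  # Robin
```

Swapping sqlite3 for Psycopg2 mostly means changing the connection call and the `?` placeholders to `%s`; the open/check/update/close structure stays the same.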

5. Processing outside of API.AI’s five-second window

It needs to be said that this step complicates things by no small amount. It also makes it harder to integrate with different applications. Rather than flicking a switch to roll out through API.AI, you have to write the code that interprets authentication and user-specific messages for each platform you're integrating with. What’s more, spoken-only platforms like Google Home and Amazon Alexa don’t allow for this kind of circumvention of the rules — you have to sit within that 5–8 second window, so this method removes those options. The only reasons you should need to take the integration away from API.AI are:

  • You want to use it to work with a platform that it doesn’t have an integration with. It currently has 14 integrations including Facebook Messenger, Twitter, Slack, and Google Home. It also allows exporting your conversations in an Amazon Alexa-understandable format (Amazon has their own similar interface and a bunch of instructions on how to build a skill — here is an example).
  • You are processing masses of information. I’m talking really large amounts. Some flight comparison sites have had problems fitting within the timeout limit of these platforms, but if you aren’t trying to process every detail for every flight for the next 12 months and it’s taking more than five seconds, it’s probably going to be easier to make your code more efficient than work outside the window. Even if you are, those same flight comparison sites solved the problem by creating a process that regularly checks their full data set and creates a smaller pool of information that’s more quickly accessible.
  • You need to send multiple follow-up messages to your user. When using the API.AI integration it’s pretty much call-and-response; you don’t always get access to things like authorization tokens, which are what some messaging platforms require before you can automatically send messages to one of their users.
  • You're working with another program that can be quite slow, or there are technical limitations to your setup. This one applies to Vietnambot: I used the GSpread library in my application, which is fantastic but can be slow to pull out bigger chunks of data. What’s more, Heroku can take a little while to start up if you’re not paying.

I could have paid or cut out some of the functionality to avoid needing to manage this part of the process, but that would have failed to meet number 4 in our original conditions: It had to be possible to adapt the skeleton of the process for much more complex business cases. If you decide you’d rather use my program within that five-second window, skip back to section 2 of this post. Otherwise, keep reading.

When we break out of the five-second API.AI window, we have to do a couple of things. First thing is to flip the process on its head.

What we were doing before:

User sends message -> API.AI -> our process -> API.AI -> user

What we need to do now:

User sends message -> our process -> API.AI -> our process -> user

Instead of API.AI waiting while we do our processing, we do some processing, wait for API.AI to categorize the message from us, do a bit more processing, then message the user.

The way this applies to Vietnambot is:

  1. User says “I want [food]”
  2. Slack sends a message to my app on Heroku
  3. My app sends a “swift and confident” 200 response to Slack to prevent it from resending the message. To send the response, my process has to shut down, so before it does that, it activates a secondary process using "tasks."
  4. The secondary process takes the query text and sends it to API.AI, then gets back the response.
  5. The secondary process checks our database for a user name. If we don’t have one saved, it sends another request to API.AI, putting it in the “we don’t have a name” context, and sends a message to our user asking for their name. That way, when our user responds with their name, API.AI is already primed to interpret it correctly because we’ve set the right context (see section 1 of this post). API.AI tells us that the latest message is a user name and we save it. When we have both the user name and food (whether we’ve just got it from the database or just saved it to the database), Vietnambot adds the order to our sheet, calculates whether we’ve reached the order minimum for that day, and sends a final success message.
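Steps 3 and 4 above are the crux, so here's a stripped-down sketch of the acknowledge-then-defer pattern. In the real app the hand-off goes through Celery and Redis (the "tasks." prefix, covered in section 7); here a stdlib thread stands in, and classify() is a stub for the API.AI call:

```python
import threading
import time

replies = []

def classify(text):
    time.sleep(0.1)                  # stand-in for a slow API.AI call
    return text.replace("I want ", "")

def process_order(text):
    food = classify(text)            # the slow work happens off the request
    replies.append(f"Got it - you want {food}")

def handle_slack_event(text):
    # Kick off the slow work without waiting for it...
    worker = threading.Thread(target=process_order, args=(text,))
    worker.start()
    # ...and return the "swift and confident" 200 to Slack straight away.
    return ("", 200), worker

status, worker = handle_slack_event("I want pho")
worker.join()
print(status[1], replies)  # 200 ['Got it - you want pho']
```

The important property is that the 200 goes back before classify() finishes; Slack stops resending the event, and the user hears back a moment later via a separate message.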

6. Integrating with Slack

This won’t be the same as integrating with other messaging services, but it could give some insight into what might be required elsewhere. Slack has two authorization processes; we’ll call one "challenge" and the other "authentication."

Slack includes instructions for an app lifecycle here, but API.AI actually has excellent instructions for how to set up your app; as a first step, create a simple back-and-forth conversation in API.AI (not your full product), go to integrations, switch on Slack, and run through the steps to set it up. Once that is up and working, you’ll need to change the OAuth URL and the Events URL to be the URL for your app.

Thanks to github user karishay, my app code includes a process for responding to the challenge process (which will tell Slack you’re set up to receive events) and for running through the authentication process, using our established database to save important user tokens. There’s also the option to save them to a Google Sheet if you haven’t got the database established yet. However, be wary of this as anything other than a first step — user tokens give an app a lot of power and have to be guarded carefully.
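For reference, the "challenge" half is small: when you first set your Events URL, Slack sends a one-off url_verification payload and expects the challenge value echoed back, while ordinary events just need a quick acknowledgement. A minimal sketch (the function name is mine, and the real handler would sit behind a Flask route):

```python
import json

def slack_event_response(payload):
    # Slack's one-off URL verification: echo the challenge back to prove
    # we control this endpoint.
    if payload.get("type") == "url_verification":
        return json.dumps({"challenge": payload["challenge"]})
    # Any other event: acknowledge quickly and do the real work elsewhere.
    return "ok"

print(slack_event_response({"type": "url_verification", "challenge": "abc123"}))
```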

7. Asynchronous processing

We are running our app using Flask, which is basically a whole bunch of code we can call upon to deal with things like receiving requests for information over the internet. In order to create a secondary worker process I've used Redis and Celery. Redis is our “message broker”; it makes a list of everything we want our secondary process to do. Celery runs through that list and makes our worker process do those tasks in sequence. Redis is a note left on the fridge telling you to do your washing and take out the bins, while Celery is the housemate that bangs on your bedroom door, note in hand, and makes you do each thing. I’m sure our worker process doesn’t like Celery very much, but it’s really useful for us.

You can find instructions for adding Redis to your app in Heroku here and you can find advice on setting up Celery in Heroku here. Miguel Grinberg’s Using Celery with Flask blog post is also an excellent resource, but using the exact setup he gives results in a clash with our database, so it's easier to stick with the Heroku version.

Up until this point, we've been calling functions in our main app — anything of the form function_name(argument_1, argument_2, argument_3). Now, by putting “tasks.” in front of our function, we’re saying “don’t do this now — hand it to the secondary process." That’s because we’ve done a few things:

  • We’ve created which is the secondary process. Basically it's just one big, long function that our main code tells to run.
  • In we’ve included Celery in our imports and set our app as celery.Celery(), meaning that when we use “app” later we’re essentially saying “this is part of our Celery jobs list” or rather “ will only do anything when its flatmate Celery comes banging on the door”
  • For every time our main process asks for an asynchronous function by writing tasks.any_function_name(), we have created that function in our secondary program just as we would if it were in the same file. However, in our secondary program we’ve prefaced it with “@app.task”, another way of saying “Do wash_the_dishes when Celery comes banging on the door yelling wash_the_dishes(dishes, water, heat, resentment)”.
  • In our “procfile” (included as a file in my code) we have listed our worker process as

All this adds up to the following process:

  1. Main program runs until it hits an asynchronous function
  2. Main program fires off a message to Redis, which keeps a list of work to be done. The main process doesn’t wait; it just runs through everything after that call and, in our case, even shuts down
  3. The Celery part of our worker program goes to Redis and checks for the latest update. It checks which function has been called (our worker functions are named the same as when our main process calls them), gives our worker all the information it needs to start doing that thing, and tells it to get going
  4. Our worker process starts the action it has been told to do, then shuts down.
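The fridge-note analogy translates almost line for line. Here is a toy stdlib version — a queue.Queue as Redis (the note) and a thread as Celery plus the worker (the flatmate) — just to show the shape of the hand-off. The real thing needs a running Redis server and the celery package, which this deliberately avoids:

```python
import queue
import threading

jobs = queue.Queue()   # Redis: the note on the fridge listing work to do
done = []

def add_order(name, food):
    # In the real app this task would write the order to the spreadsheet.
    done.append(f"{name} ordered {food}")

def worker():
    # Celery: reads the note and makes the worker do each task in sequence.
    while True:
        func, args = jobs.get()
        if func is None:           # sentinel meaning "no more work"
            break
        func(*args)

# Main process: queue the task and move on (it could even shut down now).
jobs.put((add_order, ("Robin", "pho")))
jobs.put((None, ()))

t = threading.Thread(target=worker)
t.start()
t.join()
print(done)  # ['Robin ordered pho']
```

The design point is the decoupling: the main process only ever touches the queue, so it can return its HTTP response (or exit) without waiting for the work to finish.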

As with the other topics mentioned here, I’ve included all of this in the code I’ve supplied, along with many of the sources used to gather the information — so feel free to use the processes I have. Also feel free to improve on them; as I said, the value of this investigation was that I am not a coder. Any suggestions for tweaks or improvements to the code are very much welcome.


As I mentioned in the introduction to this post, there's huge opportunity for individuals and organizations to gain ground by creating conversational interactions for the general public. For the vast majority of cases you could be up and running in a few hours to a few days, depending on how complex you want your interactions to be and how comfortable you are with coding languages. There are some stumbling blocks out there, but hopefully this post and my obsessively annotated code can act as templates and signposts to help get you on your way.

Grab my code at GitHub

Bonus #1: The conversational flow for my chat bot

This is by no means necessarily the best or only way to approach this interaction. This is designed to be as streamlined an interaction as possible, but we’re also working within the restrictions of the platform and the time investment necessary to produce this. Common wisdom is to create the flow of your conversation and then keep testing to perfect, so consider this example layout a step in that process. I’d also recommend putting one of these flow charts together before starting — otherwise you could find yourself having to redo a bunch of work to accommodate a better back-and-forth.

Bonus #2: General things I learned putting this together

As I mentioned above, this has been a project of going from complete ignorance of coding to slightly less ignorance. I am not a professional coder, but I found the following things I picked up to be hugely useful while I was starting out.

  1. Comment everything. You’ll probably see my code is bordering on excessive commenting (anything after a # is a comment). While normally I’m sure someone wouldn’t want to include a bunch of Stack Overflow links in their code, I found notes about what portions of code were trying to do, and where I got the reasoning from, hugely helpful as I tried to wrap my head around it all.
  2. Print everything. In Python, everything within “print()” will be printed out in the app logs (see the commands tip for reading them in Heroku). While printing each action can mean you fill up a logging window terribly quickly (I started using the Heroku add-on LogDNA towards the end and it’s a huge step up in terms of ease of reading and length of history), the times my app was falling over were often because one specific function wasn’t getting what it needed, or because of another stupid typo. Having a semi-constant stream of actions and outputs logged meant I could find the fault much more quickly. My next step would probably be to introduce a way of easily switching on and off the less necessary print functions.
  3. The following commands: Heroku’s how-to documentation for creating an app and adding code is pretty great, but I found myself using these all the time so thought I’d share (all of the below are entered at the command line; open it by typing cmd into the Start menu on Windows or by running Terminal on a Mac):
    1. “cd [file location]” - move into the folder your code is in
    2. “git init” - create a git repository to track your code
    3. “git add .” - stage all of the code in the folder for git to put online
    4. “git commit -m "[description of what you’re doing]"” - save the staged changes
    5. “heroku git:remote -a [the name of your app]” - select your app as where to put the code
    6. “git push heroku master” - send your code to the app you selected
    7. “heroku ps” - find out whether your app is running or crashed
    8. “heroku logs” - apologize to your other half for going totally unresponsive for the last ten minutes and start the process of working through your printouts to see what has gone wrong
  4. POST requests will always wait for a response. Seems really basic — initially I thought that by just sending a POST request and not telling my application to wait for a response I’d be able to basically hot-potato work around and not worry about having to finish what I was doing. That’s not how it works in general, and it’s more of a symbol of my naivete in programming than anything else.
  5. If something is really difficult, it’s very likely you’re doing it wrong. While I made sure to do pretty much all of the actual work myself (to avoid simply farming it out to the very talented individuals at Distilled), I was lucky enough to get some really valuable advice. The piece of advice above was from Dominic Woodman, and I should have listened to it more. The times when I made least progress were when I was trying to use things the way they shouldn’t be used. Even when I broke through those walls, I later found that someone didn’t want me to use it that way because it would completely fail at a later point. Tactical retreat is an option. (At this point, I should mention he wasn’t the only one to give invaluable advice; Austin, Tom, and Duncan of the Distilled R&D team were a huge help.)
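On tip 2's last point — switching the noisier print statements on and off — one simple approach is to gate them behind an environment variable. The variable name here is made up for illustration:

```python
import os

# Gate the chattier diagnostics behind an environment variable so they can
# be switched off without deleting them. The variable name is illustrative.
DEBUG = os.environ.get("BOT_DEBUG", "0") == "1"

def debug_print(*args):
    if DEBUG:
        print(*args)

debug_print("this only appears when BOT_DEBUG=1")
print("essential logging still always prints")
```

Essential messages keep using plain print(); only the firehose goes through debug_print().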

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

2017-09-20T00:06:00+00:00 R0bin_L0rd The Moz Blog More Money Must Come In Than Go Out
Allow me to tell you a brief story.

Last summer (2016), my family drove US-2 across Northern Montana. This is a part of the country that has largely been abandoned by the modern world. That being said, there were two industries that were thriving.
  1. Hospitals and Urgent Care Centers.
  2. Drive-Ins

But overall, it was clear what was happening. As industry left the small towns along US-2, good paying jobs were hard to come by. And without good paying jobs, there wasn't a lot of money to spread around ... and this caused local businesses to close ... and this drove the small towns into even more trouble. 

Now throw Amazon into the mix - they'll deliver anything in a couple of days to Cut Bank ... so money that would have been spent locally instead goes to Amazon, causing Downtown Seattle to thrive.

Who has a plan to revive a small town along US-2 in Northern Montana?
  • The plan would require more money to come into the town than leaves the town.
This brings us to Modern Catalog Marketing ... a situation not fundamentally different than the small towns dotting US-2 in Northern Montana.
  • On average, a cataloger pays $25 - $35 of every $100 spent by a customer to various vendors ... with about two-thirds of that going to paper, printing, and postage.
  • The printing industry (in particular) uses the money to pay their onerous long-term debt load.
So the cataloger gets to keep $5 of every $100 the customer spends ... while the vendor community gets to keep $25 - $35 of every $100 that a catalog customer spends. More money is going out than is coming in (not really, but you get the point ... too much money is going out).

One of the reasons why Nordstrom has been successful over the past 15 years, and has been able to hang in there when business is bad, is that they spend very little money with marketing vendors. More money comes in than goes out.

Even Amazon, with advertising you cannot miss, is spending less than 7% of sales on advertising (click here).

How in the heck can you compete with Amazon when you spend an additional $20 of every $100 you take in with print-centric vendors ... while Amazon and other online marketers do not have to do that?

In other words, what could you do if you had an additional $20 of every $100 spent by customers available to you?

This is why I continually ask you to consider the low-cost / no-cost customer acquisition programs that I share on this blog ... programs that my online clients gladly leverage to grow.

Tomorrow, I will share with you scenarios of what business could look like if spending was different. You'll probably disagree with me ... that's fine ... send me an email message and I will print your rebuttal provided it is logical and fact-based. You can 100% disagree with me, and that's fine; the industry needs to hear your opinion.
2017-09-20T03:10:06+00:00 Kevin Hillstrom Kevin Hillstrom: MineThatData Adobe’s 2016 Holiday Shopping Predictions Bring ‘Good Tidings’ for 2017

Retailers know the holiday shopping season ramps up long before the rest of us start sporting our festive sweaters and scarves. That’s why it’s never too early to think ahead.

In the fall of 2016, Adobe Digital Insights’ Holiday Shopping Predictions highlighted trends retailers should have expected to see as they geared up for last year’s holiday season. Although we’ll soon be releasing our 2017 report for the coming holiday season, it’s worth a look in the rearview mirror to see how our 2016 predictions stacked up to reality, and what that means for your marketing strategy as you prepare for the 2017 season.

Our findings revealed that there were still some large gaps between being aware of trends and successfully leveraging that awareness. With record-breaking sales last year, the retail industry, especially e-commerce, came out ahead. But analysts are anticipating that 2017 retail holiday shopping sales growth will be lower than last year’s — dropping from 4.8 percent growth in 2016 to only 2 percent in 2017. This means retailers will have to work even harder this year to remain competitive.

By looking back at our 2016 predictions, and aligning those with real-world results, we gained some important insights that we believe will still hold true for this upcoming season.

1. Online Shopping.

Prediction:  While we projected that consumers would increase their online purchases in 2016, we also predicted that the motivations for online purchases would shift. Consumers would move from looking for great deals (–11 percent from 2015) to wanting more convenience — avoiding traffic and long lines (+4 percent from 2015), and being able to shop from work (+7 percent from 2015).

Reality: Online shopping saw an 11 percent year-over-year growth during the holiday season. Interestingly, much of the growth in revenue happened at the end of the holiday season as shoppers took advantage of later shipping options. Retailers who experienced low growth may be reaching a visit plateau as fewer net new customers shop online.

What it means for 2017: For traditional retailers, it’s important to note that despite growth in online sales, most customers still favor in-store purchasing, supplemented by online shopping. For pure play retailers, the growth in online shopping is good news, and investing more in mobile and display ads (see below) will be important to increasing your growth metrics. Either way, providing a seamless experience between your online and offline channels is essential, so think outside the box when developing holiday season campaigns that bridge the two. Given the growth in last-minute revenue that we saw last year, there’s also an opportunity for retailers to take advantage of last-minute gift buying with convenience-oriented promotions like click-and-collect or express shipping.

2. Mobile Shopping.

Prediction: In 2016, we anticipated that mobile would play a major role in holiday shopping. In fact, we predicted that it would overtake desktop for the first time in terms of driving visits to a website during the holiday season.

Reality: While desktop remained the biggest driver of visits and sales — at 50 and 73 percent, respectively — for the first time ever, there were six days in which smartphone traffic surpassed desktop traffic.

What it means for 2017: To leverage the continued growth in mobile shopping, retailers need to enhance their mobile sites. Mobile web browsing remains the dominant interaction between customers and retailers, but it still isn’t driving conversion as hoped. Retailers should make it easier to convert visitors to customers by allowing shoppers to check out as guests. Mobile apps can also improve the experience, especially when retailers use a platform that offers flexible APIs to connect their mobile apps with other content management systems, product databases, and ERP or CRM systems. This will make it easier to design a mobile experience that is optimized to allow customers to search for specific products.

3. Display Ads.

Prediction: Last year, we noted that digital display advertising was the best way to lead people to a specific discounted product or popular category. In part, this prediction was based on a downward trend of consumers’ preference for receiving SMS or text messages as ways to alert them to sales.

Reality: Search advertising boosted Black Friday performance, which increased 16 percent year-over-year, reinforcing the fact that display ads are the best way to deliver personalized ads that drive shoppers toward a specific product or category search.

What it means for 2017: Retailers, particularly those in e-commerce, should be entering the holiday season with a strong strategy for paid media campaigns. To succeed, you must use third-party platforms as part of your paid advertising strategy. By harnessing the data of other retailers, you can enhance targeting in your display ads, and complement your Google and Amazon tactics to ensure the highest impact of your paid search dollars. These paid ads can drive traffic to physical stores, as well as online venues. Also, don’t forget about dynamic display ads that can target customers based on products they have previously browsed on your website. These types of ads are getting easier to assemble and deliver. But, more critically, the competition for shoppers’ attention and dollars continues to escalate.

With the holiday shopping frenzy only a few short months away, retailers need a marketing strategy that covers all the bases, and doesn’t leave any revenue stone unturned. We anticipate that many of the trends we saw last year will continue into 2017, but we’ll be as excited as you to see what Adobe Digital Insights’ Holiday Shopping Predictions for 2017 reveals. Stay tuned.

To learn more visit us here.

The post Adobe’s 2016 Holiday Shopping Predictions Bring ‘Good Tidings’ for 2017 appeared first on Digital Marketing Blog by Adobe.

2017-09-20T08:30:20+00:00 Adobe Retail Team Omniture: Industry Insights Marketers have more data than ever, so why aren’t they better at experimentation?

The philosopher of science Paul Feyerabend famously wrote that “the only principle that does not inhibit progress is: anything goes.”


2017-09-20T10:47:17+00:00 Frederic Kalinke Posts from the Econsultancy blog Are You Using Google Data Studio? You Probably Should Be

A few weeks back, I wrote a post about creating killer reports.  I noted a few reporting platforms worth checking out. They all have their pros and cons but for paid search marketers, the one with the most value is probably Google Data Studio.

If you haven’t had a chance to check it out, it’s well worth a test drive. Not only is Google Data Studio free, but with all of the data that Google has access to, it makes for a really nice platform with a lot to offer.

Let’s talk about the beauty that is Google Data Studio.

Save Time on Reporting

One of the most cumbersome things about reporting is, without a doubt, the time that it takes to compile all of the data. With Google Data Studio, you simply create the template and set the time frame. If you want to look at a different time frame, no problem – you just update the date range of the report and it will populate.

You can also set it up to look at a rolling date range instead of a set one, so that you’re automatically reviewing the period of interest.

In short: you can set it up to handle routine data pulls so that you don’t have to update reports manually on a regular basis. Another benefit is that you can make the report as big or as small as your needs require, with no concerns about ongoing maintenance and upkeep.

I don’t suggest trying to boil the ocean or including metrics that don’t truly matter, because that waters down and ultimately devalues the report. But if timing was previously a limiting factor in your reporting, it no longer has to be.

Branded, Professional Looking Reports

No-fuss reporting must be hideous, right? Nope! Data Studio has a great report structure with the ability to leverage existing templates or create your own. There are essentially two color schemes, light and dark, pictured below. However, you have the option to update the colors within the themes, even using hex codes to match your brand’s colors precisely.

With their templates, you also have the option to upload a logo in the top corner. If you create your own template, you can put your logo wherever you’d like.

Multi-Channel Reporting

You may be wondering why you would use Google Data Studio when AdWords has some really nice reporting features, plus a dashboarding tool in beta. There is a time and place for both, but the big draw of Google Data Studio is that you can pull in multiple channels through Google Analytics. Granted, you may not have all of the data in Google Analytics that you want to report on (such as spend data for other channels), but, luckily for you, Google Data Studio integrates with Google Sheets, so you can use a Google Sheet to incorporate any data that wouldn’t exist in Google Analytics. (Hey, a few manual data entries are still way better than the manual creation of an entire report, right?)

There are also a lot of pre-built connectors that exist to integrate additional sources. Some are free and some have a nominal fee. If you have a developer at hand, you can build your own connectors.

Also, if you’re a person who likes to print reports, Google Data Studio is page-based, so reports export to PDF and print really nicely. That’s not the case for a lot of multi-channel reporting dashboards.

Visualize Your Impact

Google Data Studio makes visualizing your reports super simple. We covered this a bit above but it warrants going into even more detail.  There are so many options for organizing your data, including:

  • Charts and tables
  • Geo-maps
  • Line graphs
  • Bar charts
  • Combo charts (data in a bar graph, with additional data in a line graph over top of the bar chart)
  • Area maps
  • Pie Charts
  • Scorecards
  • Scatter charts
  • Bullet charts

You also have the ability to add images and text boxes.  Paired with the ability to report on metrics from multiple channels with the same filtering options that you’d have in Google Analytics – these can make for some really powerful reports.

Where Google Data Studio Falls Short

Google Data Studio is a really great option for most paid search marketers. As I mentioned above, there are some great options for pulling in data from other sources.

However, if your organization’s focus is largely on metrics that live outside the Google ecosystem, pulling that data in can become a cumbersome process, especially if you don’t have a developer resource handy. In those situations, there are other reporting options with native integrations into more platforms than Google Data Studio has, such as connections to CRMs, marketing automation platforms, and so on.

In any case, it is worth a look to see if Google Data Studio could benefit your organization!

Have you checked out Google Data Studio? What’s your favorite feature? We’d love to hear your thoughts in the comments!

About the Author:

Amy has built and implemented multichannel digital strategies for a variety of companies spanning several industry verticals from start-ups and small businesses to Fortune 500 and global organizations. Her expertise includes e-commerce, lead generation, and localized site-to-store strategies. Amy is currently the Director of Digital Marketing & MarTech at ZirMed.

2017-09-20T12:08:28+00:00 Amy Bishop The Clix Marketing Blog Restoration Hardware bid on 3,200 keywords, found 98% of its PPC sales came from just 22 brand terms

For years, many marketers have spent a lot of time and money trying to find the perfect keywords for their paid search campaigns.

In some cases, marketers are bidding on thousands of keywords. But could it be for naught?


2017-09-20T13:15:00+00:00 Patricio Robles Posts from the Econsultancy blog How to Choose the Right Martech and PRtech Solutions

With nearly 5,000 solutions in the B2B marketing technology and PR technology landscape, it can be daunting to decide which technology your company should use. So here are four important areas of consideration and related questions to help you identify the best solutions for your organization. Read the full article at MarketingProfs

2017-09-20T14:00:00+00:00 Marketing Profs - Marketing Concepts, Strategies, Articles, Research, Events and Commentaries New: Streaming Google Analytics Data for BigQuery
Streaming data for BigQuery export is here.

Today we're happy to announce that data for the Google Analytics BigQuery export can be streamed as often as every 10 minutes into Google Cloud.

If you're a Google Analytics 360 client who wants to do current-day analysis, this means you can choose to send data to BigQuery up to six times per hour for almost real-time analysis and action. That’s a 48x improvement over the existing three-times-per-day exports.
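The 48x figure is easy to verify with quick arithmetic:

```python
# Quick check of the "48x" claim: up to 6 exports per hour, around the
# clock, versus the existing 3 exports per day.
streaming_exports_per_day = 6 * 24   # one export as often as every 10 minutes
batch_exports_per_day = 3            # roughly every eight hours
improvement = streaming_exports_per_day / batch_exports_per_day
# improvement == 48.0
```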

What can I do with streaming data delivery?
Many businesses use faster access to their data to identify and engage with clients who show an intent to convert.

For example, it's well known that a good time to offer a discount to consumers is just after they've shown intent (like adding a product to their cart) but then abandoned the conversion funnel. An offer at that moment can bring back large numbers of consumers who then convert. In a case like this, it's critical to use the freshest data to identify those users in minutes and deploy the right campaign to bring them back.

More frequent updates also help clients recognize and fix issues more quickly, and react to cultural trends in time to join the conversation. BigQuery is an important part of the process: it helps you join other datasets from CRM systems, call centers, or offline sales that are not available in Google Analytics today to gain greater context into those clients, issues, or emerging trends.

When streaming data is combined with BigQuery's robust programmatic and statistical tools, predictive user models can capture a greater understanding of your audience ― and help you engage those users where and when they’re ready to convert. That means more sales opportunities and better returns on your investment.

What's changing?
Those who opt in to streaming Google Analytics data into BigQuery will see data delivered to their selected BigQuery project as fast as every 10 minutes.

Those who don't opt in will continue to see data delivered just as it has been, arriving about every eight hours.

Why is opt-in required?
The new export uses Cloud Streaming Service, which costs a little extra: $0.05 per GB (that is, "a nickel a gig"). The opt-in is our way of making sure nobody gets surprised by the additional cost. If you don't take any action, your account will continue to run as it does now, and there will be no added cost.
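For budgeting purposes, a back-of-the-envelope estimate helps; this hypothetical helper simply assumes a 30-day month and the quoted $0.05/GB rate:

```python
def monthly_streaming_cost(gb_per_day, rate_per_gb=0.05):
    """Rough monthly cost of the streaming export at $0.05/GB,
    assuming a 30-day month. Purely illustrative."""
    return round(gb_per_day * 30 * rate_per_gb, 2)

monthly_streaming_cost(10)   # a property exporting 10 GB/day: ~$15.00/month
monthly_streaming_cost(0.5)  # a small property at 0.5 GB/day: ~$0.75/month
```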

What data is included?
Most data sent directly to Google Analytics is included. However, data pulled in from other sources like AdWords and DoubleClick, also referred to as “integration sources,” is subject to additional requirements like fraud detection. That data is purposefully delayed for your benefit and is therefore exempt from the new streaming functionality.

For further details on what is supported or not supported, please read the help center article here.

How do I get started?
You can start receiving the more frequent data feeds by opting in. To do so, just visit the Google Analytics BigQuery linking page in the Property Admin section and select the streaming option.

You can also visit our Help Center for full details on this change and opt-in instructions.

Posted by Breen Baker, on behalf of the Google Analytics team
2017-09-20T16:00:01+00:00 Adam Singer Google Analytics Blog Tesla’s remote upgrades to its vehicles during Hurricane Irma are the future of tech

We haven’t seen entire new hardware functions being made available through software upgrades — that’s going to change.

A version of this essay was originally published at Tech.pinions, a website dedicated to informed opinions, insight and perspective on the tech industry.

One of the most appealing aspects of many tech-based products is their ability to be improved after they’ve been purchased — just this morning, Apple released a flotilla of updates, turning up its iPhone software to iOS 11, its Apple Watch to watchOS 4, and Apple TV to tvOS 11, with a Mac OS update called High Sierra due on Monday. Whether it’s adding new features, making existing functions work better, or even just fixing the inevitable bugs or other glitches that often occur in today’s advanced digital devices, the idea of upgrades is generally very appealing.

With some tech-based products, you can add new hardware — such as plugging a new graphics card into a desktop PC — to update a device. Most upgrades, however, are software-based. Given the software-centric nature of everything from modern cars to smart speakers to, of course, smartphones and other common computing devices, this is by far the most common type of enhancement that our digital gadgets receive.

The range of software upgrades made for devices varies tremendously — from very subtle tweaks that are essentially invisible to most users, through dramatic feature enhancements that enable capabilities that weren’t there before the upgrade. In most cases, however, you don’t see entire new hardware functions being made available through software upgrades. I’m starting to wonder, however, if that concept is going to change.

The event that triggered my thought process was Tesla’s recent decision to remotely and temporarily enhance the battery capacity, and therefore driving range, of its Tesla vehicles for owners in Florida who were trying to escape the impact of the recent Hurricane Irma. Tesla has offered software-based hardware upgrades — not only to increase driving range but to turn on its autonomous driving features — for several years.

Nevertheless, it’s not widely known that several differently priced models of Tesla’s cars are identical from a hardware perspective and differ only in the software loaded into the car. Want the S75 or the S60? There’s an $8,500 price and 41-mile range difference between the two, but the only actual change is a software enablement of batteries that exist in both models. Similarly, the company’s AutoPilot feature is $2,500 on a new car, but can be enabled via an over-the-air software update on most other Tesla cars for $3,000 after the purchase.
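The pattern at work is a software entitlement gating physical capacity. Here is a purely illustrative Python sketch, not Tesla's actual software (the class and numbers are invented, though the 60/75 kWh split mirrors the S60/S75 example):

```python
from dataclasses import dataclass

@dataclass
class BatteryPack:
    physical_kwh: float   # capacity physically installed in every car
    enabled_kwh: float    # capacity currently unlocked by software

    def unlock_full_capacity(self):
        """An over-the-air 'upgrade' simply raises the software cap;
        no hardware changes at all."""
        self.enabled_kwh = self.physical_kwh

# A "60" and a "75" can share the same 75 kWh pack:
car = BatteryPack(physical_kwh=75.0, enabled_kwh=60.0)
car.unlock_full_capacity()
# car.enabled_kwh is now 75.0, the same hardware as before
```

The same mechanism works in reverse, which is how a remotely granted range boost can be remotely withdrawn.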

In the case of the Florida customers, Tesla was clearly trying to do a good thing (though I’m sure many were frustrated that the feature was remotely taken away almost as quickly as it had been remotely enabled), but the practice of software-based hardware upgrades certainly raises some questions. On the one hand, it’s arguably nice to have the ability to “add” these hardware features after the fact (even with the post-purchase $500 fee above what it would have cost “built-in” to a new car), but there is something that doesn’t seem right about intentionally disabling capabilities that are already there.

Clearly, Tesla’s policies haven’t exactly held back enthusiasm for many of their cars, but I do wonder if we’re going to start seeing other companies take a similar approach on less-expensive devices as a new way to drive profits.

In the semiconductor industry, the process of “binning” — in which chips of the same design are separated into different “bins” based on their performance and thermal characteristics, and then marketed as having different minimum performance requirements — has been going on for decades. In the case of chips, however, there isn’t a way to upgrade them, except perhaps with overclocking, where you try to run a chip faster than its stated minimum frequency, with no guarantee that it will work. The nature of the semiconductor manufacturing process simply creates these different thermal and frequency ranges, and vendors have intelligently figured out a way to create different models based on the variations that occur.
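Conceptually, binning is just classifying tested parts by measured characteristics. A hedged sketch, with invented thresholds and SKU names:

```python
def bin_chip(max_stable_ghz):
    """Assign a tested die to a SKU by its measured stable frequency.
    Thresholds here are made up for illustration."""
    if max_stable_ghz >= 4.0:
        return "premium SKU"
    if max_stable_ghz >= 3.5:
        return "mid-range SKU"
    return "budget SKU"

measured = [4.2, 3.7, 3.3, 4.0]   # hypothetical test-bench results
skus = [bin_chip(f) for f in measured]
# skus == ["premium SKU", "mid-range SKU", "budget SKU", "premium SKU"]
```

The key contrast with Tesla's model: here the variation is imposed by physics and discovered by testing, not created deliberately in software.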

In other product categories, however, I wouldn’t be surprised if we start to see more of these software-based hardware upgrades. The benefits of building one hardware platform and then differentiating solely based on software can make economic sense for products that are made in very large quantities. The ability to source identical parts and develop manufacturing processes around a single design can translate into savings for some vendors, even if the component costs are a bit higher than they might otherwise be with a variety of different configurations or designs.

The truth is that it is notoriously challenging for tech hardware businesses to make much money. With few exceptions, the profit-margin percentages for tech hardware are in the low single digits, and many companies actually lose money on hardware sales. Most hope to make it up via accessories or other services. As a result, there’s more willingness to experiment with business models, particularly as we see the lifespans for different generations of products continue to shrink.

Ironically, though, after years of charging for software upgrades, we’ve seen most companies start to offer their software upgrades for free. As a result, consumers and other end users are more reluctant to pay for traditional software-only upgrades. In the case of these software-enabled hardware upgrades, however, we could start to see the pendulum swing back the other way, as virtually all of these upgrades have a price associated with them. In the case of Tesla cars, in fact, it’s a very large one. Some have argued that this is because Tesla sees itself as more of a software company than a hardware one, but I think that’s a difficult concept for many to accept. Plus, for many traditional hardware companies that may want to try this model, the positioning could be even more difficult.

Despite these concerns, I have a feeling that the software-based hardware upgrade is an approach we’re going to see a number of companies try variations on for several years to come. There’s no question that it will continue to come with a reasonable share of controversies (and risks — if the software upgrades become publicly available via frustrated hackers), but I think it’s something we’re going to have to get used to — like it or not.

Bob O’Donnell is the founder and chief analyst of Technalysis Research LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. Reach him @bobodtech.

2017-09-19T19:35:01+00:00 Bob O'Donnell Recode - Front Page Google’s AI head says super-intelligent AI scare stories are stupid

The AI apocalypse: disconcerting to imagine, but fun to talk about.

Well, that’s if Silicon Valley’s leaders are anything to go by. Tesla’s Elon Musk has been banging the drum about the dangers of super-intelligent AI for years, while Facebook’s Mark Zuckerberg thinks such doomsday scenarios are overblown. Now Google’s AI chief John Giannandrea is getting in on the action, siding with the Zuck in recent comments made at the TechCrunch Disrupt conference in San Francisco.

"There’s a huge amount of unwarranted hype around AI right now," said Giannandrea, according to a report from Bloomberg. "This leap into, ‘Somebody is going to produce a superhuman intelligence and then there’s going to be all these ethical issues’ is unwarranted and...


2017-09-20T08:34:02+00:00 James Vincent The Verge - All Posts We are the ones making the iPhone and Pixel more expensive

This year has seen a big upgrade in quality from most phone companies: Samsung’s new Galaxy Note doesn’t explode, Apple’s iPhone has a radical redesign, the OnePlus 5 is lovely, and LG’s V30 is shaping up to be a strong contender. If you love technology, you love it for precisely this inexorable march toward better, faster, and prettier devices. But one thing that’s different about the best smartphones of 2017 is that the price of admission is going up.

  • The Huawei P9 cost £449 in the UK in 2016, but this year’s P10 starts at £569. You get more, but you pay more as well. (+26%)
  • The OnePlus 3 was $399 in 2016, but the 2017 OnePlus 5’s starting price is $479. (+20%)
  • The Galaxy Note 7 was $849 in 2016, but now the Galaxy Note 8 costs $930...
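The percentages in the list are simple year-over-year price ratios. A quick check (a hypothetical helper; int() truncates, which matches the rounded-down figures quoted above):

```python
def pct_increase(old, new):
    """Year-over-year price increase, truncated to a whole percent."""
    return int((new - old) / old * 100)

pct_increase(449, 569)  # Huawei P9 -> P10:    26
pct_increase(399, 479)  # OnePlus 3 -> 5:      20
pct_increase(849, 930)  # Galaxy Note 7 -> 8:   9
```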


2017-09-20T14:58:01+00:00 Vlad Savov The Verge - All Posts How to Record Your iPhone or iPad Screen in iOS 11

iOS 11 comes with a handy new Control Center function that allows you to record what you're doing on your screen. It's great if you want to capture gameplay, walk someone through a tutorial in an app, demonstrate a bug, and more, and it's available on iPhones and iPads running iOS 11.

Enabling Screen Recording

If you don't have the screen recording icon in Control Center, you can add it in the Settings app.

  1. Open the Settings app.

  2. Choose Control Center.

  3. Select "Customize Controls."

  4. Tap the + button next to "Screen Recording" to add it to the "Include" section.

Starting a Recording

  1. Bring up the Control Center.

  2. Tap the icon for screen recording. It's two nested circles.

  3. Your iPhone or iPad will start recording video of your screen automatically following a three-second countdown.

While screen recording is turned on, a red bar is plastered across the top of the display so it's clear when you're recording and when you're not.

Ending a Recording

  1. Open Control Center again.

  2. Tap the screen recording icon.

Alternatively:

  1. Tap the red bar at the top of the screen.

  2. Confirm that you want to end recording.

The video you made is then saved to the Photos app.

Accessing Screen Recording Options

There are a few options that are available when making a screen recording, which can be accessed directly in the Control Center. To bring up these options, simply 3D Touch on the screen recording icon.

From this menu, you can start a screen recording and toggle microphone audio on or off. These are the only options that are available for the feature -- it's fairly basic.


2017-09-20T15:17:10+00:00 Juli Clover MacRumors: Mac News and Rumors - All Stories Semiotic, a visualization framework

Elijah Meeks released Semiotic into the wild. It’s a framework that allows quick charts but provides flexibility for custom stuff.

Semiotic is a React-based data visualization framework. You can see interactive documentation and examples here. It satisfies the need for reusable data visualization, without committing to a static set of charting types. It came out of a need for a data visualization framework that let us make simple charts quickly without committing ourselves to using only those charts. Semiotic incorporates the design and functionality of more complex data visualization methods as a response to the conversation these simple charts might begin.

Saving this for later, if just for the sketchy fills.


2017-09-15T08:17:45+00:00 Nathan Yau FlowingData Experience is Everything: How B2B Companies Are Competing with Marketplaces

Evolving forces are reshaping the B2B landscape, leaving many marketing teams to rethink the way they talk to their customers. Forrester Research predicts B2B e-commerce sales in the United States will top $1.1 trillion by 2020, giving B2B companies more than a trillion reasons to embrace the trend toward consumerization.

There’s just one problem — online behemoths like Amazon Business and Alibaba make intimidating competitors because they have set the standard for creating trustworthy online experiences backed by great functionality and responsive customer service. But B2B companies can differentiate themselves from third-party players by delivering optimal customer experiences. That means learning how to leverage direct-to-customer platforms without negatively affecting existing distribution partners.

By investing in state-of-the-art digital asset and content management technology, as well as analytics, targeting, and optimization solutions, your company can set itself apart from large third-party marketplaces, while giving your customers a reason to keep coming back to you. Putting the digital technology capable of delivering next-level customer experiences in place is precisely what will determine the survival of the fittest in the rapidly evolving B2B ecosystem.

Diversifying your B2B strategy.

It’s easy to think of Amazon Business and Alibaba as competition, but there may be some value in considering the old adage, “if you can’t beat ‘em, join ‘em.”

Going toe-to-toe with big marketplaces through your own online channel is as direct as it gets, but you can turn this competition into an opportunity.

“There are different ways you can experiment with B2B marketplaces,” says Tristan Saw, senior director of strategy and consulting at SapientRazorfish, Adobe Digital Marketing Partner of the Year in 2015 and 2016. “The first [step] is to use a site with a broad customer base, like Amazon Business or Alibaba, to test product demand before selling directly to customers through your own site.”

A company’s infrastructure and sales goals should determine how much it leverages third-party partners. Companies already profiting from a direct-to-customer framework may want to augment their existing online marketplace. But companies new to e-commerce might be better served by placing new products on a third-party marketplace to test the market opportunity before investing in the development of a direct channel of their own.

Several key factors play into developing the best strategy, all of which are unique to your brand and marketing. That’s why every company can find value in experimenting with direct channels and third-party marketplaces to see which dynamic works best in the context of their own digital transformation.

The path to a custom-branded marketplace.
Every enterprise-level company should be developing a strategy for competing with third-party marketplaces. If your sights are set on one-upping the heavy hitters, the secret sauce is to create a customer experience your visitors won’t soon forget. That means developing an environment that is simple, intuitive, personalized, and flexible — where B2B buyers can easily navigate from product searches through the purchasing process, and beyond, to aftermarket sales and service.

It also means creating an environment that rivals the purchasing experiences B2B buyers are already familiar with as consumers on state-of-the-art retail platforms. After all, every B2B buyer who visits your site, whether you like it or not, is going to judge each aspect of the purchasing process by comparing it to their own B2C experiences.

“If you’re serious about selling directly online, you will need to deliver a customer experience that drives sales,” says Tristan. “In order to compete, you need to invest in becoming an experience-led business, which means optimizing every digital customer touch point.”

In fact, many B2B companies can benefit from embracing a “frenemy-type” relationship with third-party marketplaces by leveraging them for testing and analytics, and ideas for the customization of your own site.

Use third-party marketplaces to test and analyze. Taking on the big guys isn’t for the faint of heart, and even well-established brands have fallen victim to their overwhelming influence in the marketplace. But there’s no denying the power of big, centralized marketplaces, so go ahead and take advantage of their reach.

Using third-party infrastructure provides a relatively low-risk test bed. By taking advantage of an established marketplace like Amazon Business, you can place your products alongside others to evaluate market demand, determine whether you can meet the needs of your buyers, and shape your own infrastructure development strategy. Once you start analyzing KPIs, such as purchasing history, demographics, and conversions, you’ll have data to help formulate a strategy that defines the best direct-to-customer channels for your business.

Select the best platform given the market opportunity. Should testing and analytics prove there is real value in using large marketplaces to your advantage, you can easily increase your presence, or determine how best to build your own e-commerce platform to meet the specific needs and expectations of your customer base. On the other hand, if demand is low, you’ll know not to invest heavily in your own e-commerce platform — or at least not until you can iterate and optimize your offering and approach to attract the market you need. Finally, there are times you’ll want to take a dual approach and determine how to align your third-party sales strategy with a direct-to-customer approach on your own site.

Weigh the pros and cons of a B2B exchange.
The need to evolve into an organization that can utilize digital tools and techniques to compete with industry rivals may cause some B2B marketers to rush the process. This is a mistake. While each approach to setting your company apart from third-party marketplace giants has its advantages, there are also caveats that must be considered before implementing your strategy.

For starters, there are financial considerations. Developing your own infrastructure requires an investment in time, staffing, and financial resources. “By selling through Amazon, they’re bringing a marketplace to you,” says Tristan. “They’re giving you access to infrastructure — like payment gateways, warehousing, and delivery systems.”

While services such as fulfillment, drop shipping, and distribution may sound like an excellent bargain, they don’t come without a price. Amazon fees vary by contract, but Tristan estimates they’re about 20 percent. “You’re giving up considerable margin to be able to sell through their channel,” he says. “That’s one element of the risk.”
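To see what a roughly 20 percent fee does to unit economics, here is a simplified sketch (a hypothetical helper; real fee structures vary by category and contract, as noted):

```python
def net_margin(gross_margin_pct, marketplace_fee_pct=20.0):
    """Points of revenue left after the marketplace's referral fee.
    The 20% default is Tristan's estimate, not a published rate."""
    return gross_margin_pct - marketplace_fee_pct

net_margin(35.0)  # a 35% gross margin shrinks to 15 points after the fee
```

A product with a thin gross margin can easily go negative after the fee, which is the risk being described.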

Another risk factor of a shared platform is losing your customers to your competitors. If you’re going to utilize a third-party site, you’re not going to be the only brand on the page offering the same or similar products. That’s why setting yourself apart from the rest of the pack is an absolute necessity. Odds are, any third-party marketplace is going to be flooded with your competitors lurking in sidebars, banner ads, and “customers also viewed” sections — only a click away from your offer. “This dynamic could lead to customers being lost to the lowest bidder,” says Tristan. “If I’m a buyer on Amazon Business, I can shop not only for your products, but your competitors’ products too.”

Competition isn’t the only worry that comes along with using third-party sites. The lack of control you have over customer feedback and reviews means trusting your brand’s reputation to someone else. It’s not uncommon for customer-oriented sites to get the praise for easy returns, while manufacturers get the negative review for selling a subpar product. “That’s the worst-case scenario because you just hinder your brand,” says Tristan. “While reviews provide transparency and social proof for buyers, they can also lead to poor brand perception from negative feedback. You shouldn’t shy away from this though, as it can also be a key input for improving your products.”

Embracing third-party infrastructure is also going to have an effect on other partners, owned portals, and sales teams. Special pricing is difficult through third-party marketplaces, and the lack of price consistency could lead to customer confusion and attrition in the long run. “If I’m a huge conglomerate that can afford to buy in bulk then most companies will offer me a volume-based discount,” says Tristan. “On Amazon, it is often unclear if you will get a discount, and sometimes you have to actively request it from a seller.”

Historically, prices on many third-party sites have varied considerably, but Amazon Business, in particular, is working to implement consistent pricing — another improvement that’s destined to place further pressure on smaller competitors in the near future. For example, Amazon is already experimenting with a pricing structure that eliminates the need to speak with a sales rep.

And this is only the beginning. Other steps to consider when adding another layer of complexity to your sales strategy include maintaining consistency across channels, managing customer relationships, and managing costs specific to each channel.

Adopt a digital foundation that will prioritize customer experience.
While leveraging B2B marketplaces should be part of your marketing mix, if you’re serious about competing with third-party B2B exchanges, you need to invest in your own web infrastructure, in which optimizing customer experience should be the number one priority. That means maintaining a level of relevancy and consistency that follows a vast array of potential buyers across every step of the customer journey. To do that, you’ll need to deploy a digital foundation that collects and analyzes data, creates and publishes content, and helps manage it across digital and offline avenues — transforming how partners and customers engage with your company across every channel.

Few companies know this better than Constellation Energy, which is investing in its own B2B infrastructure in anticipation of a future that may well bring direct competition from B2B marketplace competitors.

Approximately 2.5 million residential, public sector, and business customers rely on Constellation Energy as their energy supplier, each with their own unique set of needs that must be catered to. “We wanted customer relationships to be long-lasting relationships based on value delivered by us,” says Michael Cammon, director of digital marketing at Constellation Energy. “It’s important to take those relationships a step further by truly understanding the problems our customers are trying to solve, and how best to solve them as they relate to energy.”

Any brand looking to set itself apart from larger competitors should take a cue from Constellation Energy by customizing a digital experience platform that will help their marketing teams deliver consistent and memorable messages at scale, no matter the medium.

The sheer size of the energy group at Constellation — with customers ranging from large commercial and industrial organizations to residential and small businesses — meant that any digital transformation effort would have to address problems inherent to large enterprises. “We were very aware that the internal content management system technology we were using in the past was too difficult for a lot of our non-technical content owners to handle,” says Karen Jennings, a digital marketer with Constellation Energy.

One of the challenges for Constellation was integrating legacy software with a digital platform that delivers the tools needed to achieve the company’s marketing goals. “For example, we wanted to look at a scalable system that was easy to use, had an easy to understand vocabulary, and made it easy to manage assets,” says Karen.

By choosing a module-based system with room to grow, Constellation Energy was able to future-proof its digital transformation, while expanding the tools available to content editors using the system. Karen says the company’s content management system choice was based on scalability and growth potential. It also provided a tool that was user-friendly in terms of helping content owners create web pages and manage content, without passing maintenance work downstream.

The result of Constellation’s dedication to its digital transformation takes the customer experience to a whole new level. “For example, we have a team that works in governmental aggregation,” explains Karen. “They work at the municipality level, securing an energy price for everyone that lives in a specific jurisdiction. Now, our team can easily set up landing pages for every community to provide accurate pricing, based on what was negotiated for that community.”

Constellation Energy’s use of an integrated digital asset manager helps the utility company manage content without relying on its IT department, while building highly personalized experiences, custom-fit to meet the needs of a wide array of customers. Whether you’re marketing across third-party sites or your own digital properties, you’ll need an integrated content management system of your own to succeed.

Customer experience is the differentiating factor.
There’s no doubt B2C experiences are influencing B2B design, with marketplaces such as Amazon and Alibaba already fully built to service the needs of the B2B buyer. While mega-exchanges pose a competitive threat for some established B2B companies, notes Tristan, they are vulnerable to companies that can deliver more personalized online experiences. That’s why succeeding in the context of this emerging dynamic means developing the right experience delivery framework for your business. In order to optimize marketing, sales, and support, B2B companies need to understand that optimizing customer experiences on their own websites is the key to remaining competitive.

Whether you’re leveraging a large third-party marketplace, your own B2B e-commerce site, or a hybrid solution, experience is everything. Determine the right digital strategy that will help you create the kind of memorable experiences that your customers will want to come back for, while making sure you’re not negatively affecting your existing distribution channels. With the right approach, you can set your company apart as a customer-centric enterprise, willing to go above and beyond to cater to your customers’ needs, regardless of where you interact with them.

Learn more about Razorshop B2B online, or explore how Adobe is facilitating fluid B2B experiences in high tech and manufacturing.

#B2B Strategy

The post Experience is Everything: How B2B Companies Are Competing with Marketplaces appeared first on Digital Marketing Blog by Adobe.

2017-09-18T08:30:24+00:00 Adobe Manufacturing Team Omniture: Industry Insights Data Studio Connectors: Why & How to Use

Data Studio has seen many updates since its global launch. One of the most important is the addition of third-party data connectors. With third-party connectors, we can now import data from many external marketing channels such as Facebook, Bing, and DoubleClick.

In this article, we will discuss why you should say goodbye to Google Sheets and hello to third-party data connectors.


The conventional data import method

Before data connectors, Google Sheets was used as a medium to import data from other sources. Many online marketers still rely on this workaround, but it is a complex process.

With Google Sheets, you have to gather all your data in a single sheet and be careful about every detail in order to import the data into Google Data Studio successfully.

Another painful part of the Sheets workaround is making sure the date and time formats are compatible with Data Studio. Four things in particular make Google Sheets less useful as an import medium:

  1. You have to create a new sheet and gather the data.

  2. You have to define a process to keep the data updated.

  3. Reports built on Sheets data are not real-time.

  4. You have to make sure date and time formats are compatible.
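Point 4 is worth a concrete illustration. Data Studio typically recognizes dates supplied as plain `YYYYMMDD` text, so source dates in other formats need to be normalized before they land in the sheet. A minimal sketch (the input format `MM/DD/YYYY` is just an assumed example of what an export might contain):

```python
from datetime import datetime

def to_data_studio_date(raw: str) -> str:
    """Convert a date like '09/18/2017' into the YYYYMMDD text
    format that Data Studio recognizes by default."""
    return datetime.strptime(raw, "%m/%d/%Y").strftime("%Y%m%d")

# Normalize a column of exported dates before pasting into the sheet.
exported = ["09/18/2017", "09/19/2017"]
print([to_data_studio_date(d) for d in exported])
```

Doing this for every export, on every refresh, is exactly the kind of manual upkeep that data connectors eliminate.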


Data Connectors: The Perfect Sheet Replacement

Google Data Studio introduced third-party connectors to eliminate the need for Sheets when importing data from other marketing channels. With data connectors, visualizing data from any source becomes a simple one-step process.

Data Studio now has over 10 connector partners. Funnel, for example, built a super connector that collects data from 250+ sources and sends it to 100+ destinations.

Data Connectors Pricing

All of the data connectors are paid, but most offer a free trial period so you can test the service before subscribing. I tested the Supermetrics Facebook Ads connector and was able to create my first Facebook Ads report in Data Studio. Below is a glimpse:

[Image: Facebook Ads report in Data Studio]

This is pretty similar to the e-commerce template layout, but I am sure I could tweak it to look more like an ads report.


How to use data connectors

Connecting a third-party data source is similar to connecting a native data source. It can still be a pretty intimidating process at first for many people; it was for me.

Let me show you how to connect the Facebook Ads connector by Supermetrics to build a report like I did. Follow the step-by-step process:

Step 1: Create a new report, or open the existing report to which you want to add the data source.

Step 2: Click the CREATE NEW DATA SOURCE button in the right-hand menu if you created a new report, or click Resources >> Manage added and then click the Add a Data Source link for an existing report.

[Image: add data source]

Step 3: In the left menu, scroll down to the community connectors and click the EXPLORE CONNECTORS button.

[Image: add data connector]

Step 4: You will see a list of the available third-party connectors. Look for the Facebook Ads by Supermetrics connector and click the ADD CONNECTOR link.

[Image: Facebook Ads connector]

Step 5: Authorize Data Studio to use the community connector by clicking the Authorize button.

[Image: authorize data connectors]

Step 6: Now authorize the Facebook Ads connector.

[Image: authorize Facebook connector]

Follow the prompts to sign in to your Facebook account.

Step 7: Select the ad account and conversion window, then click the CONNECT button at the top right.

[Image: select ad account]

Step 8: Click the Add to Report button to import the connector's metrics and dimensions.

[Image: add to report]

Once you have followed these steps, you can create your report. Explore more Supermetrics connectors and how you can use them together to build a single report for all your marketing data.

If you are new to Data Studio, follow our step-by-step Data Studio tutorial to create an awesome report.

The beauty of Data Studio is that you can add multiple data sources to the same report. Now, with data connectors, you can see all your marketing cost data, e.g. from Facebook, Twitter, and AdWords, in a single report.



In a nutshell, importing your marketing data was never impossible, but it used to be a time-consuming and complex process; with third-party data connectors it is now doable in a matter of seconds. If you have already used a third-party connector, please share your experience in the comments.

2017-09-18T13:57:54+00:00 Noman Karim Blog - MarketLytics The Bad News Advantage

Telling your employees the truth — even when it’s bad — makes you a better leader. Here’s why…

Sharing bad news is a good thing.

As a leader, you might not think it, at first. But it’s true. Leaders who are honest about the bad — just as much as the good — are better leaders.

But it’s not just me saying this. Research proves this.

In a 2013 study discussed in Forbes, researchers found that leaders who gave honest feedback were rated as five times more effective than those who did not. In addition, leaders who gave honest feedback had employees who were rated as three times more engaged.

Employees yearn for this honest, corrective feedback. In a study shared in Harvard Business Review, 57% of people preferred corrective feedback to purely praise and recognition. And when asked what was most helpful in their careers, 72% of employees said they thought their performance would improve if their managers provided corrective feedback.

In other words, people don’t just want to be patted on the back and told, “Good job.” Employees want the truth. They want to know: How can I be better? What can I change or improve?

I call this “The Bad News Advantage.” When you share bad news and honest feedback, you gain three advantages:

  1. You become a better leader.
  2. You engage your team more.
  3. You’re saying what your employees want to hear.

Leaders who understand these benefits of “The Bad News Advantage” have a leg up over others.

However, despite how helpful sharing bad news and honest feedback can be, we as leaders avoid it like the plague.

In two other surveys published in Harvard Business Review, each covering nearly 8,000 managers, 44% of managers reported that they found it stressful and difficult to give negative feedback. Twenty-one percent avoided giving negative feedback entirely.

Sound familiar? :-) You may have found yourself avoiding giving negative feedback or sugar-coating your words to an employee, at some point. I know I have. Giving honest feedback can feel critical, unnatural and just flat-out uncomfortable.

Des Traynor, co-founder of Intercom, knows this feeling, too. I recently interviewed him, and he candidly admitted how he’d found himself in this situation…

Des had entered a one-on-one meeting, prepared to give honest feedback to an underperforming employee. In fact, he’d written down notes beforehand of what he wanted to say.

Then, he went into the meeting to deliver the feedback.

Upon leaving the meeting, Des looked back at his notes and realized he’d said the complete opposite to the employee. He’d minced his words, and dramatically softened what was supposed to be pointed feedback.

The employee walked away thinking he didn’t need to change anything he was doing — which was not what Des was thinking.

In that moment, Des, like many of us, had forgotten “The Bad News Advantage.” He’d forgotten that when you give difficult, honest feedback…

  1. You become a better leader.
  2. You engage your team more.
  3. You’re saying what your employees want to hear.

Des is an incredibly self-aware leader to have recognized this himself. He clearly saw the lost opportunity to improve things with an employee, and has since made delivering honest feedback — no matter how bad it is — a priority as a leader.

But that’s just Des.

How about you?

I wrote this piece as the latest chapter in our Knowledge Center. Each week, we release a new chapter on how to create an open, honest company culture. To get each chapter sent straight to your inbox, sign up below…

P.S.: Please feel free to share + give this piece

2017-09-14T15:01:40+00:00 Claire Lew Signal vs. Noise Write like you talk

You’re a better writer than you let on

A handful of years ago I was volunteering for an organization here in Chicago where we helped high school kids prepare for their college applications. These kids were the first in their families, often underprivileged, to be applying to college.

One Saturday I met a student who wanted help editing his application essay. We went over to the computer lab and he pulled up a draft he'd been struggling with.

The essay was fine. It read grammatically well.

But it was terrible. It was dry and uninteresting. Artificial intelligence could have probably auto-generated it from a history of other applications.

I doubt any recruiter would remember him. How were we going to fix this?

Most of us trying to write to gain an audience, inspire people, market ourselves, etc. are all doing it wrong.

We stick with the education and rules we learned in high school and college: “Don’t end sentences with prepositions.” “Don’t start sentences with conjunctions.” “Sentences have subjects and predicates.” We focus on the perfect paragraph and essay structure.

And if I asked most people to write an essay about their day, it would likely come out a lot like my mentee’s: stiff, formulaic, unoriginal.

But if we had an intimate conversation over coffee, the story about your day would be remarkably different. You wouldn’t worry about the word you used to start a sentence, or which of your sentences made up paragraphs. Instead, your struggles, achievements, and thoughts would hit my ears before you had a chance to think about: “Can I end a sentence with ‘at’?”

And because you weren’t worried about a hundred rules of grammar while you were talking to me, I’m that much closer to your internal voice.

The voice that makes you unique and interesting.

So my first step with the student was just to ask who he was, what he did, and what he observed all day. And then I just typed what he said. A lot of it was run-on sentences, and sentences without verbs. If he had turned this draft in to his high school English teacher, he’d have failed the assignment. So we edited it a bit to fit the grammatical rules that someone reading a college essay might expect.

But what was on that computer screen was a story in his voice. A story of how just four years ago he came to the United States, poor, with a single parent, and could barely speak English.

Then over his high school career, not only did he become an amazing student, he became a man for others. He was tutoring kids in math and leading programs to help students who were in situations that he was in just a short time ago.

When he was done, I was sitting there, mouth open with goosebumps. Some jerk must have been cutting onions next to us.

His essay was original, dramatically compelling, and told an inspiring hero’s journey.

This kid was awesome. And an essay finally came to him because he stopped worrying about the correct way to write, and just wrote like he talked.

If you find yourself struggling to get who you are onto the page, record yourself talking on your phone and write out the transcript later if you need to. Just get your voice on the page first before you start worrying about a bunch of rules.

When you finally have YOU on the page, now go back and make your bits bend to the style you want them in. But be careful about spending too much time on the grammar and the rules. Go back and make sure it still flows like you’d actually say it. Read it out loud to yourself. You’ll know you sound fake when you stutter a bit trying to read a sentence back.

Because we aren’t trying to get an A in an English class. Most of us aren’t journalists for the New York Times all trying to write in a similar and strict style.

We’re just trying to contribute to a real conversation. And we want to meet you.

P.S. You should follow me on YouTube, where I share more about how we run our business, do product design, market ourselves, and just get through life.

And if you need a zero-learning-curve system to track leads and manage follow-ups you should try Highrise.

Write like you talk was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.

2017-09-16T12:32:37+00:00 Nathan Kontny Signal vs. Noise

Probability theory basics 2017-09-14T15:32:35+00:00 DataTau
Lessons Learned from Deploying AI in the Enterprise 2017-09-16T07:02:32+00:00 DataTau
Introduction to machine learning 2017-09-16T16:32:33+00:00 DataTau
1.1 billion Taxi Trips on 3 Raspberry Pis running Spark 2017-09-18T08:02:42+00:00 DataTau
NumPy Cheat Sheet 2017-09-18T14:02:40+00:00 DataTau
The Ten Fallacies of Data Science Work 2017-09-18T21:32:28+00:00 DataTau

Made You Click: How Facebook Fed You Political Ads for Less Than a Penny

Political ads on Facebook got into your news feed at a cost of less than a penny each, highlighting the outsize reach contentious paid content can have on the social-networking site.

2017-09-15T22:43:46+00:00 Technology The Security Setting You Must Always Turn On

Personal Tech editor Wilson Rothman explains the importance of two-factor authentication—and its limitations.

2017-09-18T02:43:47+00:00 Technology How to move from m-dot URLs to responsive site

With more sites moving towards responsive web design, many webmasters have questions about migrating from separate mobile URLs, also frequently known as "m-dot URLs", to responsive web design. Here are some recommendations on how to move from separate URLs to a single responsive URL in a way that gives your site the best chance of performing well in Google's search results.

Moving to responsive sites in a Googlebot-friendly way

Once you have your responsive site ready, moving is something you can definitely do with just a bit of forethought. Assuming your desktop URLs stay the same, all you have to do is configure 301 redirects from the mobile URLs to the responsive URLs.

Here are the detailed steps:

  1. Get your responsive site ready
  2. Configure 301 redirects from the old mobile URLs to the responsive versions (the new pages). These redirects need to be done on a per-URL basis, individually from each mobile URL to the corresponding responsive URL.
  3. Remove any mobile-URL-specific configuration your site might have, such as conditional redirects or a Vary HTTP header.
  4. As a good practice, set up rel=canonical on the responsive URLs pointing to themselves (self-referential canonicals).
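When the mobile paths map one-to-one onto the desktop paths, the per-URL redirects in step 2 can often be expressed as a single host-level rewrite rule. A minimal Apache sketch, assuming the hypothetical hostnames m.example.com and www.example.com:

```apache
# Hypothetical example: send every m-dot URL to the same path
# on the responsive site with a permanent (301) redirect.
<VirtualHost *:443>
    ServerName m.example.com
    RewriteEngine On
    RewriteRule ^/?(.*)$ https://www.example.com/$1 [R=301,L]
</VirtualHost>
```

If some mobile paths differ from their desktop counterparts, use a RewriteMap or an explicit list of Redirect 301 directives instead, so each mobile URL lands on its exact responsive equivalent. The self-referential canonical from step 4 is simply a `<link rel="canonical" href="...">` tag in each responsive page's head, pointing at that page's own URL.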

If you're currently using dynamic serving and want to move to responsive design, you don't need to add or change any redirects.

Some benefits for moving to responsive web design

Moving to a responsive site should make maintenance and reporting much easier for you down the road. Aside from no longer needing to manage separate URLs for all pages, it will also make it much easier to adopt practices and technologies such as hreflang for internationalization, AMP for speed, structured data for advanced search features and more.

As always, if you need more help you can ask a question in our webmaster forum.

Posted by Cherry Prommawin, Webmaster Relations
2017-09-14T21:32:59+00:00 Google Webmaster Central Google Webmaster Central Blog