“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.
At Harrelson Agency, we manage a number of Google Ads (formerly AdWords, renamed on July 24, 2018) campaigns for clients. Simply put, there's no better way to drive web traffic to a site or a landing page, regardless of your budget, size, or goal. Whether you're selling products, raising awareness, or trying to get more people through the door of your business or church, a well-run Google Ads campaign delivers a clear return on investment.
We’ve been writing, tweaking, and managing these ads for years. I often tell clients it’s part science and part art to make everything work correctly. That may change a little after today…
Consumers today are more curious, more demanding, and they expect to get things done faster because of mobile. As a result, they expect your ads to be helpful and personalized. Doing this isn’t easy, especially at scale. That’s why we’re introducing responsive search ads. Responsive search ads combine your creativity with the power of Google’s machine learning to help you deliver relevant, valuable ads.
Simply provide up to 15 headlines and 4 description lines, and Google will do the rest. By testing different combinations, Google learns which ad creative performs best for any search query. So people searching for the same thing might see different ads based on context.
Via Google Ads Blog
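To get a feel for the scale of what Google is testing, here's a toy sketch of the combination space. This is purely illustrative arithmetic, not Google's actual serving algorithm; it assumes the commonly documented limit of up to 3 headlines and 2 descriptions shown per ad, and ignores ordering.

```python
from itertools import combinations
from math import comb

# Hypothetical ad assets an advertiser might supply.
headlines = ["Boost Your Web Traffic", "Ads That Work", "Grow Your Business"]
descriptions = ["Campaigns tuned to your budget.", "Clear ROI, measurable results."]

# Each served ad pairs a set of up to 3 headlines with up to 2 descriptions.
# With only 3 headlines and 2 descriptions there is exactly one combination.
ads = [(h, d) for h in combinations(headlines, 3)
              for d in combinations(descriptions, 2)]

# With the full 15 headlines and 4 descriptions, the unordered space is:
total = comb(15, 3) * comb(4, 2)  # 455 * 6 = 2730 combinations
```

That's thousands of candidate ads from a single set of assets, which is exactly the kind of testing volume no human campaign manager could run by hand.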
Google is rolling its new “responsive search ads” out of beta today, and they have the potential to reshape a number of processes that marketers like us use for ad campaigns. I doubt that we’ll give up the fun whiteboard sessions where we throw ideas into the open that produce the basis for most of our managed campaigns, and I’m sure I’ll still have those “shower moment” epiphanies where the perfect headline text pops into my mind as I’m applying shampoo, but I am excited about what this could mean for our clients.
There’s no doubt in my mind that many of the processes we use to build websites, manage ad campaigns on Google and Facebook, create memorable billboard taglines, or even write strategic plan documents will be automated and, as Google puts it, “responsive” in the coming decade. That’s why I’m betting Harrelson Agency’s future on the future and making sure I stay on top of everything AI, blockchain, machine learning, and augmented reality that I can.
We’ve already seen the decimation of the website development industry at the hands of democratizing creative tools like Squarespace and Weebly and Wix (as much as I dislike their pedestrian designs…). We’ll continue to see the same in other areas of marketing and advertising.
It’s worth my time to think ahead both for my clients’ bottom lines as well as Harrelson Agency’s future.
I do think augmented reality and voice-first computing (Siri, Alexa, Google Assistant, etc.) will get us out from behind computer screens and keyboards and into the “real” world. What “real” means is subjective, and that will only intensify in the coming decades as computing comes full circle to being something we do naturally with our voices and thoughts, without the need for a keyboard and mouse.
Now we just need a good pair of AR glasses from Apple or Google or some startup we haven’t heard of yet that’s working hard in a garage to change the world…
Last year, Movers and Shakers assembled a team of coders, artists and designers who use augmented reality technology to do their work. Their goal was to circumvent the city’s decision by replacing the statue and similar monuments with digital ones of other historical figures — namely, people of color and women. “I think we have an opportunity to harness the storytelling capabilities of this technology,” said Glenn Cantave, founder and lead organizer, when explaining the group’s motivations. “Who’s going to own our narrative?”
We’ve been working on this tech since the 1830s, and we’re almost to the point of mass adoption and real use cases…
Magic Leap today revealed a mixed reality headset that it believes reinvents the way people will interact with computers and reality. Unlike the opaque diver’s masks of virtual reality, which replace the real world with a virtual one, Magic Leap’s device, called Lightwear, resembles goggles, which you can see through as if wearing a special pair of glasses. The goggles are tethered to a powerful pocket-sized computer, called the Lightpack, and can inject life-like moving and reactive people, robots, spaceships, anything, into a person’s view of the real world.
This piece is originally from Dec 19, 2016, but interesting to revisit as we enter the home stretch of 2017 (and what a year it has been):
In 2017, we will start to see that change. After years of false starts, voice interface will finally creep into the mainstream as more people purchase voice-enabled speakers and other gadgets, and as the tech that powers voice starts to improve. By the following year, Gartner predicts that 30 percent of our interactions with technology will happen through conversations with smart machines.
I have no doubt that we’ll all be using voice-driven computing on an ever-increasing basis in the coming years. In our home, we have an Amazon Alexa, 4 Amazon Dots, and most rooms have Hue Smart Bulbs in the light fixtures (oh, and we have the Amazon Dash Wand in case we want to carry Alexa around with us…). I haven’t physically turned on a light in any of our rooms in months. That’s weird. It happened with the stealth of a technology that slowly but surely creeps into your life and rewires your brain the same way the first iPhone changed how I interact with the people I love. We even renamed all of our Alexa devices as “Computer” so that I can finally pretend I’m living on the Starship Enterprise. Once I have a holodeck, I’m never leaving the house.
And perhaps that’s the real trick to seeing this stealth revolution happen in front of our eyes and via our vocal cords… voice alone won’t be the next big platform. Instead, a combination of voice AND augmented reality AND artificial intelligence will power how we communicate with ourselves, our homes, our environments, and the people we love (and perhaps don’t love). In twenty years, will my young son be typing on a keyboard the way I am to compose this post? In ten years, will my 10-year-old daughter be typing on a keyboard to do her job or express herself?
I highly doubt both. Those computing processes will be driven by a relationship with a device representing an intelligence. Given that, as a species, we adapted to interact relationally through physical cues and vocal exchanges over the last 70 million years, I can’t imagine that a few decades of “typing” have radically altered the way we prefer to communicate and exchange information. It’s the reason I’m not an advocate of teaching kids how to type (and I’m a ~90 wpm touch typist).
Voice combined with AI and AR (or whatever we end up calling it… “mixed reality” perhaps?) is the next big platform because these three will fuse into something the same way the web (as an experience) fused with personal computing to fuel the last big platform revolution.
I’m not sure Amazon will be the ultimate winner in the “next platform” wars it is waging with Google (Google Assistant), Apple (Siri), Facebook (Messenger), and any number of messaging apps and startups we haven’t heard of yet. However, our future platforms of choice will be very “human” in the same way we lovingly interact with the slabs of metal and glass we all carry around and do the majority of our computing on these days. It’s hard to imagine a world where computers shrink to the size of fibers in our clothing and become transparent characters we interact with to perform whatever we’ll be performing. But for most people (I hear you, gamers), the future does not involve a keyboard, a mouse, and a screen of light-emitting diodes, and we’ll all see reality in even more divergent ways than is currently possible as augmented reality becomes mainstream in the same way that true mobile devices did after the iPhone.
Maybe I just watched too much Star Trek: The Next Generation.
But Marsbot is important for other reasons, too. She represents a different kind of bot than the ones you see in Facebook Messenger — one that’s proactive rather than passive. She’s not a chatbot, but an interruptive bot. Crowley says that most other bots are in the model of Aladdin’s lamp: you invoke them and the genie appears. Marsbot is more in the Jiminy Cricket mode, hanging over your shoulder and chiming in when needed.
I’ve been testing Marsbot for the last few days, and I’m seriously impressed. I’ve been using the Ozlo bot for random food suggestions based on location, time, preferences, etc., and I’ve been happy with Ozlo.
However, Marsbot has something unique going on… it’s not a bot that waits for you. Rather, it’s proactive. If you’ve seen Her, you know immediately what I’m talking about.
Plus, it’s built on Foursquare’s immense trove of location data accumulated over the years. And it works in your text messaging app (iMessage on Apple devices), where you’re used to getting personal updates and messages, rather than requiring you to open yet another app on your device.
Messaging bots are going to be big and change the way we do computing and think of computers.
Take notice, churches 🙂
Developed by Microsoft’s research division, Tay is a virtual friend with behaviors informed by the web chatter of some 18–24-year-olds and the repartee of a handful of improvisational comedians (Microsoft declined to name them). Her purpose, unlike AI-powered virtual assistants like Facebook’s M, is almost entirely to amuse. And Tay does do that: She is simultaneously entertaining, infuriating, manic, and irreverent.