Blue River’s key technology is called “see and spray.” It’s a set of cameras that fix onto crop sprayers and use deep learning to identify plants. If it sees a weed, it’ll hit it with pesticide; if it sees a crop, it’ll drop some fertilizer. All these parameters can be customized by the farmer, and Blue River claims it can save “up to 90 percent” of the volume of chemicals being sprayed, while also reducing labor costs.
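The decision logic described above (classify each plant, then choose a spray action) can be sketched in a few lines. This is purely an illustrative assumption of how such a pipeline might be wired together, not Blue River's actual implementation; the labels, thresholds, and `decide_action` function are all hypothetical:

```python
# Hypothetical sketch of a "see and spray" decision loop.
# Labels, thresholds, and structure are illustrative assumptions,
# not Blue River's real system.

def decide_action(label, confidence, threshold=0.9):
    """Map one plant classification to a sprayer action."""
    if label == "weed" and confidence >= threshold:
        return "spray_pesticide"
    if label == "crop" and confidence >= threshold:
        return "drop_fertilizer"
    return "skip"  # uncertain detections are left alone

# Example: detections coming from the boom-mounted cameras,
# already classified upstream by a deep learning model
detections = [("weed", 0.97), ("crop", 0.95), ("weed", 0.55)]
actions = [decide_action(label, conf) for label, conf in detections]
print(actions)  # ['spray_pesticide', 'drop_fertilizer', 'skip']
```

The threshold would be one of the parameters a farmer could tune; skipping low-confidence detections is how a system like this could cut chemical volume so dramatically.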
That’s a worry I’ve heard before. Whether you’re a job seeker meeting a recruiter, an account manager calling a customer, or a novice getting coffee with an industry veteran, handing off communications to an assistant might give you pause. You might worry that you’ll blow the opportunity, or come off as impersonal or, worse, arrogant.
I’ve been using Amy as my personal assistant to schedule meetings and add things to my calendar for a little over a year now. Amy is an “artificial intelligence powered assistant” that you interact with as you would a person. The advantage is that Amy handles all of the time-consuming email back-and-forth that comes along with scheduling meetings.
There are a number of companies coming out with similar AI-powered assistants, but x.ai has been my preference (I do test out others to keep up with things).
I schedule lots of meetings with clients, potential clients, boards, and colleagues (and podcasts), so anything that frees up my time while coming across with the same genuineness and kindness that I normally try to convey via email is a winner.
Over the past year, I’ve learned a good deal about how to work with Amy, as well as how to introduce her into an email thread with people who have no clue what AI is or why this personal assistant is not a real human being. I’m sure that learning will continue, but as a culture we’re on an upward slope of awareness about AI (whether through interactions with Alexa and Siri or through news stories), and the concept of a personal assistant powered by AI won’t be such a novelty in a few short years.
I’ve not had anyone comment on the pretentiousness of my having a personal assistant, or tell me that they were annoyed or inconvenienced by the experience of working with Amy. So maybe we’re getting over our preconceptions about the role of personal assistants in a post-Siri world.
For now, I’m continuing to use Amy to power my meetings, and I enjoy the experience of doing so!
Logo Rank is an AI system that understands logo design. It’s trained on a million+ logo images to give you tips and ideas. It can also be used to see if your designer took inspiration from stock icons.
It’s a little more “machine learning” than “AI,” but I’ll give them the benefit of the doubt. Still, this is a pretty handy tool I’ll be using to convince a few clients that their logo might need an update.
Don’t get me started on church logos, btw…
This piece is originally from Dec 19, 2016, but interesting to revisit as we enter the home stretch of 2017 (and what a year it has been):
In 2017, we will start to see that change. After years of false starts, voice interface will finally creep into the mainstream as more people purchase voice-enabled speakers and other gadgets, and as the tech that powers voice starts to improve. By the following year, Gartner predicts that 30 percent of our interactions with technology will happen through conversations with smart machines.
I have no doubt that we’ll all be using voice-driven computing on an ever-increasing basis in the coming years. In our home, we have an Amazon Alexa, four Amazon Echo Dots, and Hue smart bulbs in most rooms’ light fixtures (oh, and we have the Amazon Dash Wand in case we want to carry Alexa around with us…). I haven’t physically turned on a light in any of our rooms in months. That’s weird. It happened with the stealth of a technology that slowly but surely creeps into your life and rewires your brain, the same way the first iPhone changed how I interact with the people I love. We even renamed all of our Alexa devices “Computer” so that I can finally pretend I’m living on the Starship Enterprise. Once I have a holodeck, I’m never leaving the house.
And perhaps that’s the real trick to seeing this stealth revolution happen in front of our eyes and via our vocal cords… it’s not just voice-driven computing that is going to be the platform of the near future. In other words, voice alone won’t be the next big platform. Rather, a combination of voice AND augmented reality AND artificial intelligence will power how we communicate with ourselves, our homes, our environments, and the people we love (and perhaps don’t love). In twenty years, will my young son be typing on a keyboard the way I am to compose this post? In ten years, will my 10-year-old daughter be typing on a keyboard to do her job or express herself?
I highly doubt it, on both counts. Those computing processes will be driven by a relationship with a device representing an intelligence. Given that, as a species, we adapted over the last 70 million years to interact relationally through physical cues and vocal exchanges, I can’t imagine that a few decades of “typing” radically altered the way we prefer to communicate and exchange information. It’s the reason I’m not an advocate of teaching kids how to type (and I’m a ~90 wpm touch typist).
Voice combined with AI and AR (or whatever we end up calling it… “mixed reality” perhaps?) is the next big platform because these three will fuse into something the same way the web (as an experience) fused with personal computing to fuel the last big platform revolution.
I’m not sure Amazon will be the ultimate winner in the “next platform” wars that it is waging with Google (Google Assistant), Apple (Siri), Facebook (Messenger), and any number of messaging apps and startups that we haven’t heard of yet. However, our future platforms of choice will be very “human” in the same way we lovingly interact with the slab of metal and glass that we all carry around and do the majority of our computing on these days. It’s hard to imagine a world where computers are shrunk to the size of fibers in our clothing and become transparent characters that we interact with to perform whatever we’ll be performing, but the future does not involve a keyboard, a mouse, and a screen of light emitting diodes for most people (I hear you, gamers) and we’ll all see reality in even more differing ways than is currently possible as augmented reality quickly becomes mainstream in the same way that true mobile devices did after the iPhone.
Maybe I just watched too much Star Trek: The Next Generation.
Future generations would look back and be amazed that 21st-century life was so people-centric, he said, especially in fields such as car driving, where human fallibility put more lives at risk than was necessary.
Granted, these stories are all data-driven and lack literary flair, so human journalists still own deep reporting and analysis—for now. Narrative Science predicts that work written by its program will earn a Pulitzer Prize any day now, and that computers will handle 90 percent of journalism in roughly 15 years. If you’re dubious about robo-journalism, check out this quiz by the New York Times to see if you can distinguish between human and robot writing.