“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.
Incredible advancement in very important science…
“With its latest AI program, AlphaFold, the company and research laboratory showed it can predict how proteins fold into 3D shapes, a fiendishly complex process that is fundamental to understanding the biological machinery of life.
Independent scientists said the breakthrough would help researchers tease apart the mechanisms that drive some diseases and pave the way for designer medicines, more nutritious crops and “green enzymes” that can break down plastic pollution.”
“The new A.I., known as Reinforce, was a kind of long-term addiction machine. It was designed to maximize users’ engagement over time by predicting which recommendations would expand their tastes and get them to watch not just one more video but many more.
Reinforce was a huge success. In a talk at an A.I. conference in February, Minmin Chen, a Google Brain researcher, said it was YouTube’s most successful launch in two years. Sitewide views increased by nearly 1 percent, she said — a gain that, at YouTube’s scale, could amount to millions more hours of daily watch time and millions more dollars in advertising revenue per year. She added that the new algorithm was already starting to alter users’ behavior.
“We can really lead the users toward a different state, versus recommending content that is familiar,” Ms. Chen said.”
via “The Making of a YouTube Radical” by Kevin Roose in the New York Times
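The policy-gradient idea behind an algorithm like Reinforce can be sketched in a few lines. This is a generic REINFORCE update over three hypothetical videos, not YouTube's actual system; the reward values, learning rate, and candidate set are invented for illustration:

```python
import math
import random

def softmax(prefs):
    # Turn raw preference scores into a probability distribution over videos.
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_step(prefs, chosen, reward, lr=0.1):
    # One REINFORCE update: nudge the log-probability of the chosen item
    # up in proportion to the reward (here, a stand-in for watch time).
    probs = softmax(prefs)
    return [p + lr * reward * ((1.0 if i == chosen else 0.0) - probs[i])
            for i, p in enumerate(prefs)]

random.seed(0)
prefs = [0.0, 0.0, 0.0]  # three candidate videos, no prior signal
for _ in range(200):
    probs = softmax(prefs)
    chosen = random.choices(range(3), weights=probs)[0]  # sample from the policy
    reward = 1.0 if chosen == 2 else 0.1  # invented engagement signal
    prefs = reinforce_step(prefs, chosen, reward)

# The policy drifts toward the video that keeps people watching longest.
assert softmax(prefs)[2] > 0.5
```

The "long-term addiction machine" framing follows from exactly this loop: whatever maximizes the reward signal is what the policy learns to serve more of.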
Fantastic post… every organization, nonprofit, and church could gain valuable insight from the takeaways here:
The best path ahead is to seek out the affected stakeholders and work with them towards a fair and equitable system. If we can identify and remove bias against people with disabilities from our technologies, we will be taking an important step towards creating a society that respects and upholds the human rights of us all.
At Harrelson Agency, we manage a number of Google Ads campaigns (rebranded from AdWords on July 24, 2018) for clients. Simply put, there’s really no better way to drive web traffic to a site or a landing page regardless of your budget, size, or goal. Whether you’re selling stuff, raising awareness, or trying to get more people into the door of your business or church, there’s a clear return on investment for a well-run Google Ads campaign.
We’ve been writing, tweaking, and managing these ads for years. I often tell clients it’s part science and part art to make everything work correctly. That may change a little after today…
Consumers today are more curious, more demanding, and they expect to get things done faster because of mobile. As a result, they expect your ads to be helpful and personalized. Doing this isn’t easy, especially at scale. That’s why we’re introducing responsive search ads. Responsive search ads combine your creativity with the power of Google’s machine learning to help you deliver relevant, valuable ads.
Simply provide up to 15 headlines and 4 description lines, and Google will do the rest. By testing different combinations, Google learns which ad creative performs best for any search query. So people searching for the same thing might see different ads based on context.
Via Google Ads Blog
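The combination testing Google describes can be approximated with a simple bandit loop. Everything below is a toy stand-in: the headlines, descriptions, and click-through rates are invented, and Google's actual system uses far more sophisticated machine learning than epsilon-greedy selection:

```python
import itertools
import random

# Hypothetical assets (Google accepts up to 15 headlines and 4 descriptions).
headlines = ["Fast Local Service", "Call Today", "Trusted Since 1990"]
descriptions = ["Free quotes in minutes.", "Rated 5 stars by clients."]
combos = list(itertools.product(headlines, descriptions))

shows = {c: 0 for c in combos}
clicks = {c: 0 for c in combos}

def ctr(c):
    # Observed click-through rate for a headline/description pairing.
    return clicks[c] / shows[c] if shows[c] else 0.0

def pick_combo(epsilon=0.1):
    # Epsilon-greedy: usually serve the best-performing combination so far,
    # occasionally explore a random one to keep learning.
    if random.random() < epsilon or not any(shows.values()):
        return random.choice(combos)
    return max(combos, key=ctr)

random.seed(1)
for _ in range(2000):
    c = pick_combo()
    shows[c] += 1
    true_ctr = 0.12 if c == combos[0] else 0.04  # invented click-through rates
    if random.random() < true_ctr:
        clicks[c] += 1

best = max(combos, key=ctr)
```

The "people searching for the same thing might see different ads" behavior falls out of the exploration step: some impressions are spent testing combinations rather than serving the current winner.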
Google is rolling its new “responsive search ads” out of beta today, and they have the potential to reshape a number of processes that marketers like us use for ad campaigns. I doubt that we’ll give up on the fun whiteboard sessions where we throw ideas into the open that produce the basis for most of our managed campaigns, and I’m sure I’ll still have those “shower moment” epiphanies where the perfect headline text pops into my mind as I’m applying shampoo, but I am excited about what this could mean for our clients.
There’s no doubt in my mind that a great deal of the processes we use to build websites or manage Ads campaigns on Google and Facebook or to create memorable billboard taglines or to even write strategic plan documents will be automated and “responsive” as Google says in the coming decade. That’s why I’m betting Harrelson Agency’s future on the future and making sure that I’m staying on top of everything AI and blockchain and machine learning and augmented reality that I can.
We’ve already seen the decimation of the website development industry at the hands of democratizing creative tools like Squarespace and Weebly and Wix (as much as I dislike their pedestrian designs…). We’ll continue to see the same in other areas of marketing and advertising.
It’s worth my time to think ahead both for my clients’ bottom lines as well as Harrelson Agency’s future.
I don’t know… this feels a little like Henry Ford’s “if I had asked what people wanted, they would have said a faster horse” approach to utilizing tech to save humans time…
Like Google, we envision a future that’s based on collaboration between humans and machines. Where we seem to differ is that we believe a human handoff is essential when initiating a conversation between an AI assistant and a human. This human acknowledgement of AI preserves the human-to-human relationship and makes resuming the non-transactional parts of the conversation much more natural. With their policy reversal, it sounds like Google has realized that you need to at least let people know they’re interacting with AI.
The theological lens through which we might view these questions is incarnation. In an age of increased engagement with disembodied digital assistants, what might it mean for the church to counterweight this with insisting on and facilitating in-person fellowship? In an era of disembodied conversation, my prayer is that the church might be a contrast society to model a more excellent way of fully-embodied community and in-person presence.
I fundamentally disagree with John Chandler here regarding the notion that smart assistants (such as Siri or Alexa or Google Assistant or Cortana or Bixby or M or… well, there are many more) lead to more antisocial behavior or the dangers of people not interacting with other people.
Chandler also invokes the (in)famous Nicholas Carr article Is Google Making Us Stupid? from 2008. One of my favorite rebukes to that article comes from a review of Carr’s subsequent book on the topic, The Shallows: How the Internet is Changing the Way We Think, Read, and Remember:
Perhaps what he needs are better strategies of self-control. Has he considered disconnecting his modem and Fedexing it to himself overnight, as some digital addicts say they have done? After all, as Steven Pinker noted a few months ago in the New York Times, ‘distraction is not a new phenomenon.’ Pinker scorns the notion that digital technologies pose a hazard to our intelligence or wellbeing. Aren’t the sciences doing well in the digital age, he asks? Aren’t philosophy, history and cultural criticism flourishing too? There is a reason the new media have caught on, Pinker observes: ‘Knowledge is increasing exponentially; human brainpower and waking hours are not.’ Without the internet, how can we possibly keep up with humanity’s ballooning intellectual output?
Socrates had the same fears of antisocial behavior and intellectual laziness about books that we now project onto television, music, and the internet, and he admonished Plato to stop writing them.
Smart assistants such as Siri or Alexa do pose a whole new world of possibilities for developers and companies and groups to interact with the connected world. In just a few short years, many of us (our household included) now use these assistants to do everything from schedule events on our cloud-based calendars to turn the lights off before bed. I also stream music, play audio books, ask questions, and crack riddles with Alexa, Siri, and Google Assistant on a daily basis.
While we fear the inevitability of a bleak future as depicted in the 2013 movie Her, in which human beings are completely subsumed into relationships and a reality driven by their own personal digital assistants and rarely interact with others, I don’t think that’s the reality we’ll see. There’s a simple reason for that… antisocial behavior is a part of our own internal psychologies and neural pathways. Projecting these fears onto tools such as Siri is misplaced. I’d argue that positioning the church to be anti-tool in order to encourage incarnational relationships is misplaced as well.
This isn’t the same as arguing that “guns don’t kill people; people kill people,” although that’s an easy leap to make. No, what I’m arguing here has to do with coming to terms with the ongoing revelation we are making and receiving about the very nature of human thought, and how our brains and nervous systems work in tandem with our concept of consciousness. Understanding that newspapers, books, radio, TV, the internet, and now Siri don’t make us any more or less lazy or antisocial is an important step in understanding that the core issue of incarnation relies on relationality between humans and the universe.
I do agree with Chandler that the church should be anti-cultural in the sense that it provides a way for exploration into the concept of incarnation. But positing that experience as anti-tool or anti-specific-technology seems to undercut the very notion of the incarnation’s theological and ongoing event in history (call that the kerygma or the Christ event or God Consciousness etc).
Yes, technology can be addictive or exacerbate issues. But let’s address the fundamental issues of culture and personal psychology that the church is called to engage, with a healthy and holy notion of inter-personal and inner-personal relationships.
Like all artificial intelligence, the software will improve over time, as it digests more text. Even more exciting, the general strategy of In Codice Ratio—jigsaw segmentation, plus crowdsourced training of the software—could easily be adapted to read texts in other languages. This could potentially do for handwritten documents what Google Books did for printed matter: open up letters, journals, diaries, and other papers to researchers around the world, making it far easier to both read these documents and search for relevant material.
Twenty qubits have been entangled together and put into one network. Huge… computing is about to get “spooky,” as Einstein put it.
In high school physics, electrons bounce between two layers, like a car changing lanes. But in reality, electrons don’t exist in one place or one layer — they exist in many at the same time, a phenomenon known as quantum superposition. This odd quantum behavior offers a chance for devising a new computer language — one that uses infinite possibilities. Whereas classic computing uses bits, these calcium ions in superposition become quantum bits, or qubits. While past work had created such qubits before, the trick to making a computer is to get these qubits to talk to one another.
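The superposition described above can be made concrete with a little arithmetic. A minimal sketch, assuming nothing about the actual trapped-ion hardware:

```python
import math

# A single qubit's state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measurement yields 0 with probability |alpha|^2.
alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)  # equal superposition
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
assert abs(p0 - 0.5) < 1e-12 and abs(p1 - 0.5) < 1e-12

# Describing n entangled qubits classically takes 2**n amplitudes, which is
# why getting qubits to "talk to one another" scales so explosively.
def amplitudes_needed(n_qubits):
    return 2 ** n_qubits

assert amplitudes_needed(20) == 1_048_576  # the 20-qubit network above
```

That last line is the whole story: twenty networked qubits already span over a million classical amplitudes, and each added qubit doubles the count.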
Sergey Brin, President of Alphabet (Google’s parent company), on the computational explosion over the last few years and near-future possibilities of quantum computing in the annual Founders Letter:
The power and potential of computation to tackle important problems has never been greater. In the last few years, the cost of computation has continued to plummet. The Pentium IIs we used in the first year of Google performed about 100 million floating point operations per second. The GPUs we use today perform about 20 trillion such operations — a factor of about 200,000 difference — and our very own TPUs are now capable of 180 trillion (180,000,000,000,000) floating point operations per second.
Even these startling gains may look small if the promise of quantum computing comes to fruition. For a specialized class of problems, quantum computers can solve them exponentially faster. For instance, if we are successful with our 72 qubit prototype, it would take millions of conventional computers to be able to emulate it. A 333 qubit error-corrected quantum computer would live up to our name, offering a 10^100x (a googol) speedup.
TPUs refers to Google’s “Tensor Processing Units” as discussed here last year.
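Brin's figures are easy to sanity-check. A back-of-envelope sketch (the 16-bytes-per-amplitude assumption is mine, not Google's):

```python
# Sanity checks on the figures quoted above.
pentium_flops = 100e6  # ~100 million FLOP/s (late-1990s Pentium II)
gpu_flops = 20e12      # ~20 trillion FLOP/s
tpu_flops = 180e12     # ~180 trillion FLOP/s

assert round(gpu_flops / pentium_flops) == 200_000  # "a factor of about 200,000"
assert tpu_flops / gpu_flops == 9.0                 # TPUs: 9x the GPU figure

# Emulating n qubits classically needs 2**n complex amplitudes; at an
# assumed 16 bytes per amplitude, a 72-qubit state vector runs to tens of
# zettabytes, which is why "millions of conventional computers" would be
# required to match the prototype.
state_bytes = (2 ** 72) * 16
```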
The common notion that computers, phones, tablets, and the like have peaked, and that we now have tech that is “good enough” and has reached a nice plateau, is a false lull in the upward trajectory of computing power. We’ll see tech innovations “trickle down” to the general global population in the next decade, causing disruption and rapid change in all parts of our lives, from medicine to education to finance to government to interacting with our daily environments (and the other people there).
In the shadow of Amazon’s offices in downtown Seattle, people enter a tiny grocery store, take whatever they want, and then walk out. And nobody runs after them screaming. This is what it’s like to shop at Amazon Go, the online retail giant’s vision for the future of brick-and-mortar stores. There are no checkout clerks, or even checkout stands. Instead, a smartphone app, hundreds of regular and infrared cameras on the ceiling (black on black, so they blend in), computer-vision algorithms, and machine learning work together to figure out what you’re picking up and charge you for it on a credit card connected to your Amazon account.
Blue River’s key technology is called “see and spray.” It’s a set of cameras that fix onto crop sprayers and use deep learning to identify plants. If it sees a weed, it’ll hit it with pesticide; if it sees a crop, it’ll drop some fertilizer. All these parameters can be customized by the farmer, and Blue River claims it can save “up to 90 percent” of the volume of chemicals being sprayed, while also reducing labor costs.
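The decision loop Blue River describes reduces to a simple dispatch per detected plant. This is a hypothetical sketch: classify() stands in for their proprietary deep-learning model, and the config dict mirrors the farmer-customizable parameters mentioned above:

```python
# Hypothetical sketch of the see-and-spray decision loop. classify() stands
# in for Blue River's vision model, which is not public.
def classify(detection):
    # Placeholder: a real system would run a trained model on the camera
    # frame; here we just read a pre-assigned label.
    return detection["label"]

def treat(detection, config=None):
    # Farmer-customizable mapping from plant type to nozzle action.
    config = config or {"weed": "pesticide", "crop": "fertilizer"}
    return config[classify(detection)]

assert treat({"label": "weed"}) == "pesticide"
assert treat({"label": "crop"}) == "fertilizer"
```

The claimed "up to 90 percent" chemical savings comes from this per-plant targeting: spray only where a weed is detected instead of blanketing the whole field.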
That’s a worry I’ve heard before. Whether you’re a job seeker meeting a recruiter, an account manager calling a customer, or a novice getting coffee with an industry veteran, handing off communications to an assistant might give you pause. You might worry that you’ll blow the opportunity, come off impersonal or worse, arrogant.
I’ve been using Amy as my personal assistant to schedule meetings and add things to my calendar for a little over a year now. Amy is an “artificial intelligence powered assistant” that you interact with as you would a person. The advantage is that Amy handles all of the time-consuming email back-and-forth that comes along with scheduling meetings.
There are a number of companies coming out with similar AI powered assistants, but x.ai has been my preference (I do test out others to keep up with things).
I schedule lots of meetings with clients, potential clients, boards, and colleagues (and podcasts), so anything that frees up my time while coming across with the same genuineness and kindness that I normally try to convey via email is a winner.
Over the past year, I’ve learned a good deal about how to work with Amy as well as how to introduce or include Amy in an email thread with people who have no clue what AI is or why this personal assistant is not a real human being. I’m sure that will continue, but as a culture we are on an upward slope of awareness about AI (whether because of interactions with Alexa and Siri or because of news stories), and the concept of a personal assistant powered by AI won’t be such a novelty in a few short years.
I’ve not had anyone comment on the pretentiousness of having a personal assistant or tell me that they were annoyed or inconvenienced by the experience of working with Amy. So maybe we’re getting over our preconceptions about the role of personal assistants in a post-Siri world.
For now, I’m continually using Amy to power meetings and enjoy the experience of doing so!
Logo Rank is an AI system that understands logo design. It’s trained on a million+ logo images to give you tips and ideas. It can also be used to see if your designer took inspiration from stock icons.
It’s a little more “machine learning” than “AI,” but I’ll give them the benefit of the doubt. Still, this is a pretty handy tool I’ll be using to convince a few clients that their logo might need an update.
Don’t get me started on church logos, btw…
This piece is originally from Dec 19, 2016, but interesting to revisit as we enter the home stretch of 2017 (and what a year it has been):
In 2017, we will start to see that change. After years of false starts, voice interface will finally creep into the mainstream as more people purchase voice-enabled speakers and other gadgets, and as the tech that powers voice starts to improve. By the following year, Gartner predicts that 30 percent of our interactions with technology will happen through conversations with smart machines.
I have no doubt that we’ll all be using voice-driven computing on an ever increasing basis in the coming years. In our home, we have an Amazon Alexa, 4 Amazon Dots, and most rooms have Hue Smart Bulbs in the light fixtures (oh, and we have the Amazon Dash Wand in case we want to carry Alexa around with us…). I haven’t physically turned on a light in any of our rooms in months. That’s weird. It happened with the stealth of a technology that slowly but surely creeps into your life and rewires your brain the same way the first iPhone changed how I interact with the people I love. We even renamed all of our Alexa devices as “Computer” so that I can finally pretend I’m living on the Starship Enterprise. Once I have a holodeck, I’m never leaving the house.
And perhaps that’s the real trick to seeing this stealth revolution happen in front of our eyes and via our vocal cords… it’s not just voice-driven computing that is going to be the platform of the near future. In other words, voice alone won’t be the next big platform. There will be a combination of voice AND augmented reality AND artificial intelligence that will power how we communicate with ourselves, our homes, our environments, and the people we love (and perhaps don’t love). In twenty years, will my young son be typing on a keyboard the same way I’m doing to compose this post? In ten years, will my 10-year-old daughter be typing on a keyboard to do her job or express herself?
I highly doubt both. Those computing processes will be driven by a relationship to a device representing an intelligence. Given that, as a species, we adapted over the last 70 million years to relational interaction built on physical cues and vocal exchanges, I can’t imagine that a few decades of “typing” have radically altered the way we prefer to communicate and exchange information. It’s the reason I’m not an advocate of teaching kids how to type (and I’m a ~90 wpm touch typist).
Voice combined with AI and AR (or whatever we end up calling it… “mixed reality” perhaps?) is the next big platform because these three will fuse into something the same way the web (as an experience) fused with personal computing to fuel the last big platform revolution.
I’m not sure Amazon will be the ultimate winner in the “next platform” wars it is waging with Google (Google Assistant), Apple (Siri), Facebook (Messenger), and any number of messaging apps and startups we haven’t heard of yet. However, our future platforms of choice will be very “human,” in the same way we lovingly interact with the slabs of metal and glass we all carry around and do the majority of our computing on these days. It’s hard to imagine a world where computers shrink to the size of fibers in our clothing and become transparent characters we interact with to perform whatever we’ll be performing. But the future does not involve a keyboard, a mouse, and a screen of light-emitting diodes for most people (I hear you, gamers), and we’ll all see reality in even more differing ways than is currently possible as augmented reality becomes mainstream in the same way that true mobile devices did after the iPhone.
Maybe I just watched too much Star Trek: The Next Generation.
Future generations would look back and be amazed that 21st Century life was so people-centric, he said, especially in fields, such as car driving, where human fallibility put more lives at risk than was necessary.
Granted, these stories are all data-driven and lack literary flair, so human journalists still own deep reporting and analysis—for now. Narrative Science predicts that work written by its program will earn a Pulitzer Prize any day now, and that computers will control 90 percent of journalism in roughly 15 years. If you’re dubious about robo-journalism, check out this quiz by the New York Times to see if you can distinguish between human and robot writing.