Inside a Genius Mind: Leonardo’s Notebooks

Amazing web app (direct link to the Google Experiment at the bottom) focused on major themes in Leonardo’s notebooks and connecting them with machine learning. I’m a huge fan of notebooks, and I frequently use the example of Leonardo keeping his thoughts in them with my own students.

If you’re like me and really into Leonardo’s “notebooking” practices and history, I highly suggest you check out the videos Adam Savage has done on his Tested YouTube channel. Wonderful and inspiring videos. May we all find something that moves us in such a way!

From the Leonardo da Vinci: Inside a Genius Mind post:

From the stages of his life to dispelling myths, and examining his masterpieces up close, everyone can delve into Leonardo’s mind as we’ve brought together for the first time 1,300 pages from his collections of volumes and notebooks. The codices, brimming with sketches, ideas, and observations, offer a window into the boundless imagination of one of history’s greatest polymaths. With the aid of Machine Learning and the curatorial expertise of Professor Martin Kemp, the accompanying experiment, also called “Inside a Genius Mind,” unravels these intriguing and sometimes mysterious materials.

Full experiment here!

Apple Intends to Replace iPhones with AR

Kuo: Apple AR Headset Coming in Late 2022 With Mac-Level Computing Power – MacRumors:

Apple is intending it to support a “comprehensive range of applications” with an eye toward replacing the iPhone within ten years.

So much is going to change in our society in the next 10-15 years… electric vehicles as the predominant mode of transportation, plant-based “meat” brewed at local establishments the way craft beer is micro-brewed, the real introduction of augmented reality, and the paradigm of the beloved slab of glass giving way to other mediums.

It’s going to be a fascinating decade ahead.

Hauntology and Valhalla

Like many who went through college and then grad school in religion / literature / philosophy circles, I’ve read and pondered my share of Derrida and the consequences of ontology on our “demon-haunted world” … another reason I’ve absolutely loved playing AC Valhalla (about 125 hours in at this point since picking it up over the holidays).

A pun coined by French philosopher Jacques Derrida in the early ’90s, hauntology refers to the study of nonexistence and unreality (so the opposite of ontology). Contemporary philosopher Mark Fisher makes extensive use of this concept, describing hauntology in his book The Weird and the Eerie as “the agency of the virtual … that which acts without (physically) existing.”

For me, there’s no greater example of this than in Valhalla’s ruins. While open-world games are often dominated by landscape, mirroring the history of art where scenic oil paintings—once considered inferior—grew into a position of relative dominance, the ruin has seen a similar ascendancy. Just as Romantic poets mulled over the allure of rivers and mountains, a passion for ancient ruins bloomed too, with painters like J. M. W. Turner and John Constable touring Britain in search of architectural wreckage among the rolling hills.

— Read on www.wired.com/story/assassins-creed-valhalla-eerie-english-landscapes/

Googling Inside Your Church

Fascinating piece on Google Maps history and possible directions…

So Google likely knows what’s inside all of the buildings it has extracted. And as Google gets closer and closer to capturing every building in the world, it’s likely that Google will start highlighting / lighting up buildings related to queries and search results.

This will be really cool when Google’s/Waymo’s self-driving cars have AR displays. One can imagine pointing out the window at a building and being able to see what’s inside.

via Google Maps’s Moat

Voice Isn’t the Next Big Platform

This piece is originally from Dec 19, 2016, but it’s interesting to revisit as we enter the home stretch of 2017 (and what a year it has been):

In 2017, we will start to see that change. After years of false starts, voice interface will finally creep into the mainstream as more people purchase voice-enabled speakers and other gadgets, and as the tech that powers voice starts to improve. By the following year, Gartner predicts that 30 percent of our interactions with technology will happen through conversations with smart machines.

via Voice Is the Next Big Platform, and Alexa Will Own It | WIRED

I have no doubt that we’ll all be using voice-driven computing on an ever-increasing basis in the coming years. In our home, we have an Amazon Echo, four Echo Dots, and Hue Smart Bulbs in most rooms’ light fixtures (oh, and we have the Amazon Dash Wand in case we want to carry Alexa around with us…). I haven’t physically turned on a light in any of our rooms in months. That’s weird. It happened with the stealth of a technology that slowly but surely creeps into your life and rewires your brain, the same way the first iPhone changed how I interact with the people I love. We even renamed all of our Alexa devices “Computer” so that I can finally pretend I’m living on the Starship Enterprise. Once I have a holodeck, I’m never leaving the house.
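For the curious, the plumbing behind “Computer, turn on the lights” is surprisingly mundane: the Hue Bridge exposes a local REST API, and toggling a bulb is a single JSON payload. Here’s a minimal sketch in Python (the bridge IP and API username below are placeholders; you generate your own by pressing the bridge’s link button):

```python
import requests

# Placeholders: discover your bridge's IP on your network and create an
# API username by pressing the bridge's link button first.
BRIDGE_IP = "192.168.1.2"
API_USERNAME = "your-api-username"
LIGHT_ID = 1  # the bulb to control

def set_light(on: bool, brightness: int = 254) -> None:
    """Toggle a Hue bulb by PUTting a state payload to the bridge."""
    url = f"http://{BRIDGE_IP}/api/{API_USERNAME}/lights/{LIGHT_ID}/state"
    payload = {"on": on, "bri": brightness}  # bri ranges from 1 to 254
    response = requests.put(url, json=payload, timeout=5)
    response.raise_for_status()
    print(response.json())

if __name__ == "__main__":
    set_light(True)  # "Computer, turn on the lights"
```

Alexa (via the Hue skill) is essentially doing this on my behalf every time I ask.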

And perhaps that’s the real trick to seeing this stealth revolution happen in front of our eyes and via our vocal cords… it’s not just voice-driven computing that is going to be the platform of the near future. In other words, voice won’t be the next big platform. A combination of voice AND augmented reality AND artificial intelligence will power how we communicate with ourselves, our homes, our environments, and the people we love (and perhaps don’t love). In twenty years, will my young son be typing on a keyboard the way I am to compose this post? In ten years, will my 10-year-old daughter be typing on a keyboard to do her job or express herself?

I highly doubt it. Those computing processes will be driven by a relationship to a device representing an intelligence. Given that, as a species, we adapted to relational interaction through physical cues and vocal exchanges over the last 70 million years, I can’t imagine that a few decades of “typing” radically altered the way we prefer to communicate and exchange information. It’s the reason I’m not an advocate of teaching kids how to type (and I’m a ~90 wpm touch typist).

Voice combined with AI and AR (or whatever we end up calling it… “mixed reality” perhaps?) is the next big platform because these three will fuse into something new, the same way the web (as an experience) fused with personal computing to fuel the last big platform revolution.

I’m not sure Amazon will be the ultimate winner in the “next platform” wars it is waging with Google (Google Assistant), Apple (Siri), Facebook (Messenger), and any number of messaging apps and startups we haven’t heard of yet. However, our future platforms of choice will be very “human” in the same way we lovingly interact with the slab of metal and glass we all carry around and do the majority of our computing on these days. It’s hard to imagine a world where computers shrink to the size of fibers in our clothing and become transparent characters we interact with to perform whatever we’ll be performing. But for most people (I hear you, gamers), the future does not involve a keyboard, a mouse, and a screen of light-emitting diodes. We’ll all see reality in even more differing ways than is currently possible as augmented reality becomes mainstream in the same way that true mobile devices did after the iPhone.

Maybe I just watched too much Star Trek: The Next Generation.

Why augmented reality’s future is more practical and rational than you realize

Bryan Richardson, Android software engineer at stable|kernel, wants you to consider this: what if firefighters could wear a helmet that could essentially see through the walls, indicating the location of a person in distress? What if that device could detect the temperature of a wall? In the near future, the amount of information that will be available through a virtual scan of our immediate environment and projected through a practical, wearable device could be immense.

Source: The Technology Behind Pokémon Go: Why Augmented Reality is the Future

Call Pokemon Go silly, stupid, trendy, absurd, etc. To a certain extent, the game is incredibly inane. However, it does illustrate that memes and mass fads can still occur in large numbers despite the “fracturing” of broadcast media and the loss of hegemonic culture.

The more immediate question to me, though, is what to do with this newfound cultural zeitgeist around AR. Surely there will be more copycat games that try to mirror what Pokemon Go, Nintendo, and Niantic have created. Some will be “better” than Pokemon Go. Some will be direct rip-offs.

Tech behemoths such as Facebook, Microsoft, Samsung, HTC, and now Google understand the long-term implications of AR and are each working on internal and public projects to make use of this old-but-new intense hope and buzz around the idea of using technology to augment our human realities. I say realities because we shouldn’t forget how we experience the world: photons bounce off of things and enter our eyeballs through a series of organic lenses that flip them upside down onto the theater screen that is our retina. The retina pushes those signals through the optic nerve to the visual cortex, where our electrochemical neurons attempt to derive or make meaning from the data and relay it back down our spinal cord to the rest of our bodies. There’s lots of room for variation and subjectivity given that we’re all a little different biologically and chemically.

We’re going to see a fast-moving evolution of tools for professions such as physicians, firefighters, and engineers, as well as applications in the military, in classrooms, etc., that will give some people pause. That always happens, whether the new technology is movable type or writing or books or computers or the web.

Games (and, unfortunately, porn) tend to push us ahead when it comes to these sorts of tech revolutions. That will certainly be the case with augmented reality. Yes, Pokemon Go is silly, and people playing it “should get a life.” But remember, the interactions they are making now with that game and with each other will improve the systems of the future and save and improve lives. Also… don’t get me started on what it means to “have a life” given the electrochemical clump of neurons we are all operating from, regardless of our views on objectivity, Jesus, or etiquette.

Our AI Assisted (Near) Future


Courtbot was built with the city of Atlanta in partnership with the Atlanta Committee for Progress to simplify the process of resolving a traffic citation. After receiving a citation, people are often unsure of what to do next. Should they appear in court? When should they appear? How much will the fine cost? How can they contest the citation? The default is often to show up at the courthouse and wait in line for hours. Courtbot allows the public to find out more information and pay their citations.

Source: CourtBot · Code for America
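Courtbot’s core loop is easy to picture: a text message comes in with a citation number, the service looks it up, and a reply goes back with the details. Here’s a minimal sketch of that flow using Flask and Twilio; this is my own illustration, not the actual Courtbot code (which Code for America publishes), and the lookup table and citation data are hypothetical:

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

# Hypothetical stand-in for a real court records lookup.
CITATIONS = {
    "ATL-12345": {"court_date": "2016-06-01 9:00 AM", "fine": "$150"},
}

@app.route("/sms", methods=["POST"])
def sms_reply():
    """Twilio posts incoming texts here; we reply with citation info."""
    citation_id = request.form.get("Body", "").strip().upper()
    citation = CITATIONS.get(citation_id)

    resp = MessagingResponse()
    if citation:
        resp.message(
            f"Citation {citation_id}: court date {citation['court_date']}, "
            f"fine {citation['fine']}. Reply PAY to pay by phone."
        )
    else:
        resp.message("Citation not found. Text your citation number, e.g. ATL-12345.")
    return str(resp)

if __name__ == "__main__":
    app.run(port=5000)
```

The point isn’t the code; it’s that a few dozen lines plus an SMS gateway can replace hours of standing in line.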

Merianna and I were just talking about the implications of artificial intelligence and interactions with personal assistants such as my beloved Amy.

The conversation came about after we decided to “quickly” stop by a Verizon store and upgrade her phone (she went with the iPhone SE, btw… tiny but impressive). We ended up waiting 45 minutes in a relatively sparse store before being helped with a process that took all of 5 minutes. With a 7-month-old baby, that’s not a fun way to spend a lunch break.

The AI Assistant Talk

We were in a part of town that we don’t usually visit, so I opened up the Ozlo app on my phone and decided to see what it recommended for lunch. Ozlo is a “friendly AI sidekick” that, for now, recommends meals based on user preferences in a messaging format. It’s in a closed beta, but if you’re up for experimenting, it hasn’t steered me wrong over the last few weeks of travel and in-town meal spots. It suggested a place that neither one of us had ever heard of, and I was, quite frankly, skeptical. But with the wait and a grumpy baby, we decided to try it out. Ozlo didn’t disappoint. The place was tremendous; we both loved it and promised to return often. Thanks, Ozlo.

Over lunch, we discussed Ozlo and Amy, and how personal AI assistants are going to rapidly replace the tortured experience of having to visit a cell provider’s store for a device upgrade (of course, we could have just gone to a Best Buy or ordered straight from Apple as I do for my own devices, but most people visit their cell provider’s storefront). I said that I couldn’t wait to message Amy and tell her to find the best price on the 64 GB Space Gray iPhone SE, order it, have it delivered the next day, and hook it up to my Verizon account. Or message Amy and ask her to take care of my traffic ticket with the bank account she has access to. These are menial tasks that can somewhat be accomplished with “human-powered” services like TaskRabbit, Fancy Hands, or the new Scale API. However, I’d like my assistant to be virtual in nature because I’m an only child and I’m not very good at trusting other people to get things done the way I want them done (working on that one!). Plus, it “feels” weird for me to hire out something I “don’t really have time to do,” even if they are willing and more than ready to accept my money to do it.

Ideally, I can see these personal AI assistants interfacing with human-powered services like Fancy Hands when something requires an actual phone call or a physical-world interaction that AI simply can’t (yet) perform, such as picking up dry cleaning.

I don’t see this type of workflow or production flow being something just for elites or geeks, either. Slowly but surely, with innovations like Siri or Google Now or just voice-assisted computing, a large swath of the population (in the U.S.) is becoming familiar with and engaging with the training wheels of AI-driven personal assistants. It’s not unimaginable that very soon my Amy will be interacting with Merianna’s Amy to help us figure out a good place and time to meet for lunch (Google Calendar is already quasi doing this, though without the personal-assistant portion). Once Amy or Alexa or Siri or Cortana or whatever personality Google Home’s device will have is able to tap into services like Amy or Scale, we’re going to see some very interesting innovations in “how we get things done.” If you have a mobile device (which most adults and a growing number of young people do), you will have an AI assistant that helps you get very real things done in ways that you wouldn’t think possible now.

“Nah, this is just buzzword futurisms. I’ll never do that or have that kind of technology in my life. I don’t want it.” People said the same thing about buying groceries or couches or coffee on their phones in 2005. We said the same thing about having a mobile phone in 1995. We said the same thing about having a computer in our homes in 1985. We said the same thing about ever using a computer to do anything productive in 1975. We said the same thing about using a pocket calculator in 1965.

In the very near future of compatible APIs and interconnected services, I’ll be able to message this to my AI assistant (saving me hours):

“Amy, my client needs a new website. Get that set up for me on the agency’s Media Temple account as a new WordPress install and set up four email accounts with the following names. Also, go ahead and link the site to Google Analytics and Webmaster Tools, and install Yoast to make sure the SEO is OK. I’ll send over some tags and content, but pull the pictures you need from their existing account. They like having lots of white space on the site as well.”
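Much of that request is already scriptable today; the assistant’s real job would be translating conversational intent into calls like these. A rough sketch of just the WordPress portion, driven from Python via WP-CLI (the domain, paths, and credentials are placeholders, and the email and analytics steps are host-specific, so they’re omitted):

```python
import subprocess

# Placeholder values an assistant would pull from the conversation
# and from the agency's hosting account.
SITE_PATH = "/var/www/client-site"
SITE_URL = "https://client-example.com"
ADMIN_EMAIL = "admin@client-example.com"

def wp(*args: str) -> None:
    """Run a WP-CLI command against the new install."""
    subprocess.run(["wp", *args, f"--path={SITE_PATH}"], check=True)

# Download core, create the config, and install the site.
wp("core", "download")
wp("config", "create", "--dbname=client_db", "--dbuser=client",
   "--dbpass=changeme")  # placeholder; use a real secret in practice
wp("core", "install", f"--url={SITE_URL}", "--title=Client Site",
   "--admin_user=agency", f"--admin_email={ADMIN_EMAIL}")

# Yoast SEO covers the "make sure the SEO is ok" request.
wp("plugin", "install", "wordpress-seo", "--activate")
```

The missing piece isn’t the automation; it’s the natural-language layer that turns my message into that script reliably.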

That won’t put me out of a job, but it will make what I do even more specialized.

Whole sectors of jobs and service-related positions will disappear while new jobs that we can’t think of yet will be created. If we look at the grand scheme of history, we’re just at the very beginning of the “computing revolution” or “internet revolution,” and the keyboard / mouse / screen paradigm of interacting with the web and with computers themselves is certainly going to change (soon, I hope).


Reverse Engineering Humanity


“We believe that a computer that can read and understand stories, can, if given enough example stories from a given culture, ‘reverse engineer’ the values tacitly held by the culture that produced them,” they write. “These values can be complete enough that they can align the values of an intelligent entity with humanity. In short, we hypothesise that an intelligent entity can learn what it means to be human by immersing itself in the stories it produces.”

Source: Robots could learn human values by reading stories, research suggests | Books | The Guardian

Our stories are important. Our ability to have, interpret, and produce intuition is seemingly something very human. However, we’re finding out that’s not necessarily the case.

Of Siri and Hesiod

There’s a very subtle but very real history behind Siri (and Google Now and Amazon Echo’s Alexa and Microsoft’s Cortana) having a female voice and persona…

“But because the creatures in these myths are virtually identical to their creators, these narratives raise further questions, of a more profoundly philosophical nature: about creation, about the nature of consciousness, about morality and identity. What is creation, and why does the creator create? How do we distinguish between the maker and the made, between the human and the machine, once the creature, the machine, is endowed with consciousness—a mind fashioned in the image of its creator? In the image: the Greek narrative inevitably became entwined with, and enriched by, the biblical tradition, with which it has so many striking parallels. The similarities between Hesiod’s Pandora and Eve in Genesis indeed raise further questions: not least, about gender and patriarchy, about why the origins of evil are attributed to woman in both cultures.”

Source: The Robots Are Winning! by Daniel Mendelsohn | The New York Review of Books