“We are now confident we know how to build AGI…”

That statement should be both exciting and a "whoa" moment for all of us. This is big, and you should be paying attention.

Reflections – Sam Altman:

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

OpenAI’s Strawberry

Happening quickly…

Exclusive: OpenAI working on new reasoning technology under code name ‘Strawberry’ | Reuters:

The document describes a project that uses Strawberry models with the aim of enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms “deep research,” according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.

AI and Bicycle of the Mind

I don’t have the same optimism that Thompson does here, but it’s a good read and worth the thought time!

The Great Flattening – Stratechery by Ben Thompson:

What is increasingly clear, though, is that Jobs’ prediction that future changes would be even more profound raises questions about the “bicycle for the mind” analogy itself: specifically, will AI be a bicycle that we control, or an unstoppable train to destinations unknown? To put it in the same terms as the ad, will human will and initiative be flattened, or expanded?

OpenAI’s Lens on the Near Future

Newton has the best take I’ve read (and I’ve read a lot) on the ongoing OpenAI / Sam Altman situation… worth your time:

OpenAI’s alignment problem – by Casey Newton – Platformer:

At the same time, though, it’s worth asking whether we would still be so down on OpenAI’s board had Altman been focused solely on the company and its mission. There’s a world where an Altman, content to do one job and do it well, could have managed his board’s concerns while still building OpenAI into the juggernaut that until Friday it seemed destined to be.

That outcome seems preferable to the world we now find ourselves in, where AI safety folks have been made to look like laughingstocks, tech giants are building superintelligence with a profit motive, and social media flattens and polarizes the debate into warring fandoms. OpenAI’s board got almost everything wrong, but they were right to worry about the terms on which we build the future, and I suspect it will now be a long time before anyone else in this industry attempts anything other than the path of least resistance.

AI Assistants and Education in 5 Years According to Gates

I agree with his take on what education will look like over the coming decade for the vast majority of people, young and old, with access to the web. Needless to say, AI is going to be a big driver of what it means to learn, and of how most humans experience that process in more authentic ways than are currently available…

AI is about to completely change how you use computers | Bill Gates:

In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.

Education Innovation and Cognitive Artifacts

A must-read from Mr. Brent Kaneft (our Head of School at Wilson Hall, where I am a teacher)…

Wise Integration: Sea Squirts, Tech Bans, and Cognitive Artifacts (Summer Series) | Brent Kaneft – Intrepid ED News:

So the strange paradox of innovation is that every innovation has the potential to be an existential threat to the physical, social, spiritual, and cognitive development of humans. The allure is the convenience (our brains are always looking to save energy!) and the potentiality innovation offers, but the human cost can be staggering, either immediately or slowly, like the impact of mold secretly growing behind an attractive wallpaper. To return to Tristan Harris’s point: machines are improving as humans downgrade in various ways. As professional educators, we have to ask whether innovation will prove detrimental to the fundamental qualities we want to develop in our students.

It’s a Different Sort of Revolution

I don’t think we’re prepared to understand how AI (especially more advanced generative AIs) will impact what we currently consider career jobs… especially those requiring advanced degrees.

This is a stark difference from past societal shifts, when it was physical-labor-focused employment and careers that were impacted…

Biggest Losers of AI Boom Are Knowledge Workers, McKinsey Says – Bloomberg:

In that respect, it may be the opposite of significant technology upgrades of the past, which often came at the expense of occupations where workers had fewer educational qualifications and got paid less. Many were performing physical tasks — like the British textile workers who smashed up new cost-saving weaving machines, a movement that became known as the Luddites.

By contrast, the new shift “will challenge the attainment of multiyear degree credentials,” McKinsey said.

DeepMind AI Cracks Protein Folding

An incredible advancement in very important science…

“With its latest AI program, AlphaFold, the company and research laboratory showed it can predict how proteins fold into 3D shapes, a fiendishly complex process that is fundamental to understanding the biological machinery of life.

Independent scientists said the breakthrough would help researchers tease apart the mechanisms that drive some diseases and pave the way for designer medicines, more nutritious crops and “green enzymes” that can break down plastic pollution.”

apple.news/A2R762pmKQm-u_eRyAQnmZg

YouTube and “Reinforcing” Psychologies

“The new A.I., known as Reinforce, was a kind of long-term addiction machine. It was designed to maximize users’ engagement over time by predicting which recommendations would expand their tastes and get them to watch not just one more video but many more.

Reinforce was a huge success. In a talk at an A.I. conference in February, Minmin Chen, a Google Brain researcher, said it was YouTube’s most successful launch in two years. Sitewide views increased by nearly 1 percent, she said — a gain that, at YouTube’s scale, could amount to millions more hours of daily watch time and millions more dollars in advertising revenue per year. She added that the new algorithm was already starting to alter users’ behavior.

“We can really lead the users toward a different state, versus recommending content that is familiar,” Ms. Chen said.”

via “The Making of a YouTube Radical” by Kevin Roose in the New York Times