This is Culture & Code, a newsletter and podcast about Creativity and Artificial Intelligence. Culture & Code explores innovation across storytelling, technology and audiences to help creative professionals collaborate better with AI and each other.
Bomb Cyclones and AI Prediction
It took some decades, but my Disaster Bingo Card is about done. I’ve lived through hurricanes, tornadoes, floods, blizzards, windstorms, and the 9/11 terrorist attacks. So a Bomb Cyclone? Why not? And give me a double while you’re at it.
Aside from the destruction of life and property, a common thread of these disasters is how they rupture normalcy and daily life. By normalcy, I mean the ability to predict a future event in a way that lines up with your ability to prepare, plus confidence that preparation will lead to a decent outcome. So if a normal Pac-NW weather report says rain (a safe bet in November), you match up shoes with coat and get on with life. Prediction helps you remove inconvenience and stay productive. That’s Case 1.
Case 2 is a weather report that says a Bomb Cyclone is coming. First of all, the technical term, Bombogenesis, should get an award for scientific terminology. Bombogenesis sounds like “POW!!” from the old Batman series while being pretty accurate too. Bombogenesis happens when a cold air mass collides with a warm air mass, in this case over the Pacific Ocean. Our air pressure dropped 24 millibars in 24 hours, the textbook threshold for the term, before the system teamed up with marine winds off the coast. The result was near-hurricane-force blasts of wind slamming into 60-to-80-foot fir trees with shallow roots on soggy ground. My wife and I slept in the garage.
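For the technically curious, that 24-in-24 threshold is simple enough to check in a few lines of code. Here’s a minimal sketch in Python; the pressure readings and the hourly sampling are hypothetical, and meteorologists adjust the exact threshold by latitude.

```python
# Minimal sketch of the "bomb cyclone" threshold: a pressure drop of at
# least 24 millibars within 24 hours. Readings below are hypothetical.

def is_bombogenesis(pressures_mb, hours_per_reading=1.0):
    """Return True if any 24-hour window shows a drop of 24+ mb."""
    window = int(24 / hours_per_reading)  # number of readings spanning 24 hours
    for start in range(len(pressures_mb) - window):
        if pressures_mb[start] - pressures_mb[start + window] >= 24:
            return True
    return False

# Hypothetical series falling ~1 mb per hour, like the storm described above.
readings = [1008 - hour for hour in range(30)]
print(is_bombogenesis(readings))  # True
```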
On the one hand, the prediction of the storm was spot on. They correctly called the timing and area of the most intense winds almost down to street level. But once those big fir trees started falling, it didn’t matter that the experts nailed the prediction. While the storm raged, you and your property were either lucky or unlucky. People and technology predicted. Nature and chance decided.
Trust me, the preceding is relevant to Artificial Intelligence.
This is my first disaster of the ChatGPT era. As I walked through the neighborhood, I thought about all the AI tools and services I use during a “normal” day. Not one of them had the slightest relevance or utility to the challenges of recovering from the storm. Not one of them!
Fortunately, my wife and I just have the physical work of cleaning up. We were lucky this time. Other neighbors weren’t lucky, which will cause a data bloom of case numbers, claim forms, call records, photos, texts, chats and emails related to property and other things. No doubt there will be AI-mediated services and technologies all along the way. That’s how it should be.
However, the storm also made me think of two mindsets at work regarding risk and incentives in the AI Age. One mindset is mildly dangerous. The other is extremely dangerous.
The mildly dangerous mindset is that the ultimate value of AI revolves around Case 1: making life more convenient and productive. Case 1 forms 99% of the value propositions behind 99% of the AI startups out there. Case 1 is for when life is “normal”, that is, when it’s predictable and an accurate prediction lets you do something valuable.
Case 2 is for when life is disrupted. What AI-powered services and tools are out there that enhance my survivability in adverse conditions or specifically help me recover from a disaster? That’s a more difficult proposition in a commercial market. It’s one reason we have government and academic organizations funding basic research. In the case of AI, however, the salaries and resources available to foundational innovators pale in comparison to those in the private sector.
All the while, the AI industry shows an insatiable appetite for electric power, which contributes to climate change. As someone said, “I don’t want to burn up the planet just to have a better spam filter or put dog ears on my selfies.” I’m confident the balance between gimmicky and serious AI services and tools will shift in a better direction over time. I’ve seen it happen over multiple tech cycles.
But the extremely dangerous mindset we must avoid at all costs is to confuse predicting an outcome with controlling an outcome. They are not the same thing. Yet that is precisely what’s being sold right now by a lot of AI model builders. Moreover, I believe most of these model builders sincerely believe that prediction and control can be packaged and delivered. When life is normal, that just might be the case.
But when life is disrupted, believing that shit can kill you.
I’ll write a future newsletter that drills into what “hardened” AI systems might look like, and the mindset for dealing with a disruptive future. Between natural disasters and human politics, that’s probably what’s on the menu in 2025.
Interview with Ramsay Wharton, Arizona Film Commission
This week’s interview is with Ramsay Wharton, Program Manager for the Arizona Commerce Authority’s Film & Digital Media Program. I met Ramsay at the AI Film 3 Festival in Glendale, AZ, where one of Culture & Code’s AI short films, OZYMANDIAS 2500 A.D., played on the big screen.
Ramsay and I discussed some of the ins and outs of Arizona’s aesthetic and economic environment for filmmaking, plus how it’s evolving to keep pace with change, including AI. After all, why would a public official make a trip to an AI and Crypto Film Festival?
What are the submissions to the A.I. Film Festival? Who are the panelists? What are they speaking about? What can we glean and learn about this technology being used in the industry and across multiple sectors? — Ramsay Wharton
Ramsay and I cover those questions and more. LISTEN HERE:
Coming Soon to the Culture & Code Podcast & Newsletter
1.) An interview with a top email marketer about the role of email in an AI world
2.) One of LA’s top AI Community Leaders speaks with Culture & Code