THANK you for this thought-provoking essay.
Some highlights for me:
The focus on human judgment: "This only works if the practitioner has enough independent expertise to distinguish a genuinely interesting provocation from confident nonsense." Pretty sure we need to prepare for an absolute deluge of confident futures nonsense.
The role of human facilitation and collective sensemaking...in human time. Points to one of my favorite topics - the "new physics of collective sensemaking." (https://journals.sagepub.com/doi/10.1177/26339137251328909 and https://journals.sagepub.com/doi/10.1177/26339137251367733#core-bibr1-26339137251367733-1 and https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4751774).
The potential for democratization of futures work and the arrival of the kinds of simulation capabilities that we have been talking about for years. Now is the time to not just be a dramaturg but perhaps also a set designer, building the props for people to stage their own plays.
The call to action: "Invest in the parts of your practice that live at the ends: reading rooms, framing questions, designing emotional arcs, shepherding action through institutional resistance." This makes me want to get a lot better at the parts of facilitation that feel most challenging for me.
The proof will be in the pudding, as they say, and I look forward to exploring more AI-generated outputs. Since the practice is so subjective, my guess is that each of us will have different places where we feel the outputs fall flat, sensationalize, or turn into mushy slop.
Thank you! This is the best analysis I have read of what AI may or may not do and the impact on us humans. I wish more people would read it instead of focusing on fears and threats.
Without a case study showing what you mean, it's all abstraction.
Fair challenge. The piece is deliberately written as a practitioner account rather than a formal case study. I obviously can't share confidential client work, and I suspect most practitioners are in the same position. I'd also gently note that waiting for peer-reviewed case studies before engaging with a capability shift this fast-moving is itself a strategic choice, and not necessarily a safe one.
You are, of course, free to ignore all of this as a fake weak signal :)
I find that this is exactly where the true value is. Every one of us can then experiment and test in real situations, and they will all be different. That is also the difference between the contribution of a human and an AI: it is precisely venturing where AI can never flourish.
I appreciate the new mental model put forth in this piece, Sami. I’ve also noticed the levelling up in Opus 4.6, but I find that without very intentional guidance, AI-generated futures still suffer from a sort of convergence to the creative mean. That is, they weigh initial inputs and common signals & drivers too heavily and tend to come up with the “obvious” future. They have trouble finding and integrating truly unique signals, unless the human does the legwork of finding good signals and explicitly instructing the AI on how to think about them. This is similar to what happens when you get a bunch of inexperienced people to do foresight; in 2017-18, we were running foresight workshops, and we had a joke that any group working on food futures would invariably converge on “meal kits personalized to your DNA delivered by drones,” because that was the most obvious interpretation of the most salient signals at the time.
The leap forward with the newest models is that you *can* push them to create more thoughtful futures, but I still very much find that I need to do the pushing. For now at least, I think the left side of your barbell also demands heavy human guidance to get beyond the obvious. Because of that, one role of the dramaturge will be to elegantly deal with all the stakeholders who have used AI to create surface-level scenarios they’re attached to because they “created” those scenarios, but lacked the critical experience to push them to a useful place. World models may completely reshape that, though.
On the right side of the barbell, I think futurists could and should do what they’ve always seemingly been afraid of: taking concrete action. Other professions are experiencing similar barbelling (software, UX, product designers…CEOs), and at some point individual barbells will start to overlap others. The same AI tools that are causing this will allow people to be effective in operating across a wider space. Futurists who can create scenarios and then develop real product prototypes for those scenarios will be far more useful than the futurist who clings to scenario development as their whole thing.