Shattered Glass & 4 Questions To Ask Ourselves About AI
From the desk of Shane Snow
Hi everyone! As I mentioned in my last post, I’m back to posting after a bit of a break. Just a reminder: I’ll be focusing on sharing full posts in these emails instead of just link roundups, and we’ll still publish Ex Post Facto here and there.
In 2017, a true crime reality series called Shattered launched in the United Kingdom. The opening title sequence shows the name of the series in white on a dark background, which then shatters like glass.
In order to create this title sequence, the show’s visual effects team went to work planning how to design computer graphics of shattering glass that wouldn’t look fake. The physics of how glass breaks, how it moves, and especially how it reflects lighting, were extremely complex.
“It was going to cost a fortune,” said Christopher Webb, owner of FX WRX, who was brought in to help with the project.
So instead of paying that fortune, Webb got a real piece of glass, screen printed the word “SHATTERED” on it, and broke it in real life.
Shattered’s title sequence looks fantastic. The FX WRX team added clever tricks to make the glass reflect images of the show’s characters, and more. And it all cost far less money and effort to create than if they’d done it all in CG.
Sometimes, the best way to solve a problem is to not use the latest fancy technology.
I’ve had this story on my mind ever since Webb told it to me. His company specializes in using analog tools in order to make special effects for film and TV—which goes against the grain in an industry that’s desperate for innovation, and therefore willing to glom onto shiny new tools without thinking through whether they’re the right tools for the job.
But we shouldn’t beat up Hollywood for doing this (I run a film and TV technology company myself; folks who work in this ’biz just want to do a good job!). When I look at how businesses at large are adopting AI tools into their workflows, I see the same desperation, and often, the same mistakes.
The reality is, new tools that purport to make work easier can sometimes make work take longer. Often, a new tool promises us more optionality, but in practice it presents us with more time-consuming choices. Often, a new tool speeds up one facet of a job but creates more work in total. (Anyone who’s ever spent hours trying to get MidJourney to output the perfect image, then realized they could have found a stock photo or drawn something in Photoshop in much less time, knows what I’m talking about.)
We’re living in an era where so much new technology is being developed at such speed that leaders can’t afford not to pay attention to (or try out) tools that could help us level up. Artificial intelligence is the tool category of the moment in this regard. But whether it’s AI or whatever comes next, it pays to remember that just because a technology is new and exciting doesn’t mean it’s going to be better for every use case. And just because a tool is cool doesn’t mean it’s the right tool for the job.
So how do we deal with today’s barrage of new AI tools without falling victim to the unintended pitfalls of technology that’s not quite right for us right now?
It starts with understanding the ways that implementing new technologies can go wrong, so we can experiment with new tools while keeping our eyes open for signs that we’re going down unhelpful rabbit holes.
At the most basic level, leaders can ask questions that get at those common pitfalls that come with implementing new tools:
Will using this tool trade deliberate, directed work for guess-and-check work? (And if so, how will we make sure the new way of working doesn’t take longer?)
Will using this tool trade a perfect result for a quick result? (And if so, are we going to be okay with trading lower quality for higher speed?)
Will this tool cause people to think more inside the box? (And if so, is that what we want?)
What ripple effects could using this tool create for us or the system we operate in? (And are we okay with the potential second-order consequences?)
Of course, any new tool ought to be weighed for its potential usefulness in accelerating our organizations toward our goals. But without balancing that potential gain with the potential second-order effects, we risk making our work harder, not better.
Have a stellar day!
—Shane
P.S. If you liked this post, check out my article & podcast series on Lateral Thinking and my innovation & teamwork training programs at Snow Academy.