How I Use AI (And How I Don't)
We live in an uncertain time.
It isn't just things like politics or the economy, which would be overwhelming enough on their own.
Our online spaces - an ever-larger part of how we interact with one another - are increasingly post-trust.
By which I mean:
Everything you see, you must question; by questioning, you begin to doubt.
I wrote last week about the various ways in which alien "entities" have overwhelmed Twitter. That these "bots" - Twitter accounts which look real, but are in fact AI programs - are often mundane does not diminish their alienness. They are, in a very real way, unknowable.
Even though the bot problem is significant, most users on Twitter are real people. That doesn't matter, though: because many users are bots, we have to examine every user for signs of artificiality. Every tweet becomes subject to scrutiny, no matter how benign. In response to our detection efforts, AI programs become more and more "real-sounding," less polished, and our detection has to operate at an increasingly high level: does this Tweet make sense given the context? Have I heard something like this before? And if I have heard it before, does that make it more likely to be human, or less? These attempts always lead to a kind of cognitive spiral, in which your own instincts become the subjects of examination. The experience is not pleasant.
The takeaway here is that some content being fake spoils the experience of all content, a strange echo of Gresham's Law ("the bad drives out the good"). Genuine content must be subjected to the same rigorous examination as everything else, adding significant cognitive burden to something that's supposed to be...you know, fun.
Which brings us back to the idea of post-trust.
No one can trust anything. Hell, even if there is a person on the other end of that Tweet, the chances that they are arguing in good faith are vanishingly small. We live in a post-4chan culture that elevates trolling for its own sake. We are in a race to see who can care the least, respect the fewest boundaries, be the most unmoored from any level of engagement with lived reality.
We say things simply to say them, not because we necessarily believe them, or can even understand them. The speech act is the only act.
There is a silver lining to this cloud:
Those who can build trust will also build power.
Make no mistake: simply because our online culture is post-trust does not mean we don’t want trust. We do. In fact, we crave it; we are social animals, evolved to exist in networks of social pressure and reward. Trust is built into how we communicate and interact, and without it we feel alienated, adrift, and alone.
As a result, developing trust with an audience - be it a company, a family, a friend group, or the viewing public - is becoming more valuable by the second. The more our cultural figures betray our trust, the more we turn towards those who won't. This points to a long-term strategy of both gaining and using power: consistently say what you mean and mean what you say.
To wit:
I thought it would be a good idea to have a discussion on AI, and more specifically, how I use AI...and how I don't.
For example, I never have and I never will use AI to write these blog posts.
It is extremely important to me that you, the reader, have an association in your head that says "Whenever I see Dan's words, I know they are actually coming from Dan." I don't use AI to come up with my ideas, to spin out countless versions of blog posts from a few sentences, to Tweet for me, or any of the other variations of AI content creation you are currently seeing.
I do use AI in my writing process - for example, for grammar and spelling correction. I also use AI to help me find and replace some bad habits (for example, I use a LOT of "softening language" in my rough drafts; AI helps me find these and replace them with more strongly worded alternatives).
In the case of particularly challenging pieces, where I'm not sure what to say, I will use AI to scan my rough draft and point out where my argument is weakest, or to ask questions implied by the text. This allows me to "get ahead" of the reader's experience and to address those issues in my final draft.
I sometimes (as I did with this post) use AI to help me sketch out a rough draft. I'll take my dog for a walk and record a long voice memo with all my thoughts regarding a topic. I'll then take a transcript of that voice memo, send it to an AI program and request an outline that incorporates my thoughts. I can then use that outline when I'm writing my first draft.
Here's a picture of the voice transcript I made for this post:

[Image: voice memo transcript]

And here's a picture of the outline generated:

[Image: AI-generated outline]
The common thread throughout all these use-cases is that I use AI like an editor, NOT as a content or idea producer.
In a post-trust world, what people want is connection. We want to know that the ideas we're coming across have some connection to real, lived experiences; that their consequences were felt, for good or for ill, by a person we can identify with and understand. This is as true in music and art as it is in business or marketing: outside of pure research scenarios, connection is what makes information influential.
Whatever your current goals, you need other people to achieve them. You need a support network, or fans, or customers. You need people to spread your message and tell others about what you're doing.
All of these things require building trust over time. The vast majority of people using AI right now are gleefully throwing all of that away in the pursuit of a few extra man- or woman-hours, prioritizing perceived "efficiency" over all else. This is a disaster: we've already polluted the internet ecosystem beyond all measure.
As The Atlantic put it recently:
"The internet is growing more hostile to humans. Google results are stuffed with search-optimized spam, unhelpful advertisements, and AI slop. Amazon has become littered with undifferentiated junk. The state of social media, meanwhile-fractured, disorienting, and prone to boosting all manner of misinformation-can be succinctly described as a cesspool."
This process, much like the ones degrading the environment, has gathered so much momentum that it will be impossible to stop short of legislation (not an outcome I consider likely).
What remains is to decide what you will do - how you will consider trust, and how you will use the tools on offer. The vast majority of what I do now centers on building trust - everything from this blog, to my podcast, to the classes I teach in Dan's Secret Society. ALL of it focuses on building trust, over time, while asking for little-to-nothing in return.
My hope in doing that is two-fold: 1.) I can have an impact on the wider world, and 2.) when the time comes for that trust to be repaid, it will be.
One can only hope.
Yours,
Dan
SOMETHING I'M READING:
A very simple, but surprisingly useful, little trick for getting your emotions under control when you're feeling overwhelmed or "flooded." Grounding the process in physiology works remarkably well for me.
