If you have fallen off a habit you want to keep, what do you do to bring yourself back to the fold? Habits are notoriously hard to build, and we should try to hold on to the ones we like. And yet, blame it on externalities or on the turns our own journeys take, some habits fall off the wagon. And when we want to bring one back, because losing it taught us what it gave us, we must not let complacency partake in the resurrection, or we risk making the return too easy. Yes, I suppose there is such a thing as too easy.
With each new technology a society absorbs, there is churn. There is creative destruction, of course, where businesses that stood on firm foundations find themselves teetering on the edge of a precipice. But there is also an assault on the habits and skills we maintained and kept a tight vigil over, lest they drop somewhere along the journey we chose for our lives.
For me, writing was one such habit. I used to be a regular here on this blog, pounding away on this WordPress and on the journalesque Tumblr I maintained for rough ideas, thoughts, and stream-of-consciousness meanderings. The lure of AI made these habits feel like dinosaurs that had no place in the modern world. And yet, one of the most important things we can do to prepare ourselves for this brave new world of AI is to hold on to the skills we cherish and love, despite the seeming advantage technology brings and the apparent redundancy it creates for those skills.
It’s funny, too, that despite LLMs being, literally, large language models, the text output we tend to see from them is hardly state of the art. Sure, they sound intelligent, and sure, they throw up a surprising edge here and there. But for the most part, the token vomit, or AI slop as it is colloquially called, is drivel outside the traditional, professional, and formal realms of the world.
Which brings me to the need, or rather the urge, to pull myself back into writing and put my meandering voices into some sort of structure – more for posterity and peace of mind than for elaboration or showcase. There is an uncanny valley to AI output that I find hard to pinpoint, but the sheer ease of its production and the ridiculous breadth of its application make it a bit non-human. And I, for one, believe we need less of that in this world. Less, but not zero. For surely there will be instances where a non-human leads us to results we could never have reached ourselves. So it’s not something we should resist, but something we should use to expand the human canon.
I have been observing how writers are leveraging AI in their online writing. From pure research buddy to extensive editorial partner with single-threaded ownership, humans are adapting their AI companions across the spectrum. While I have not seen a truly novel way of exhibiting AI outputs, some have explored: 1) placing human and AI outputs as counterparts in a debate, steelmanning arguments through AI-as-sparring-partner, 2) letting AI review human outputs and presenting the tête-à-tête as evidence to boost a case, and 3) bluntly adding an “enhanced with AI” disclaimer at the top of the post, as if to wipe away the crumbs of guilt and ownership. If AI produces so much media for us, so quickly and so easily, who owns the outcome? If industrialization produced so much material wealth, and so much planetary pollution and clutter, who owns that? We gotta own our slop. I have to at least start from there.
But what comes after? Pundits have suggested watermarking AI outputs, like real-world, online replicants that pass themselves off as humans but aren’t. That way, we could trace the origins of the content should the need arise in the future. Others have felt the need to anthropomorphize the tools, recognizing the thin line that separates our definition of consciousness from veiled hypocrisy. Regardless, these tools are not going away. Maybe hundreds of years into the future, the only recognizable vestige of human content will be aboard the Voyager probes, traversing interstellar space, waiting to be discovered and brought back from the dead by an unrecognizable human civilization.
As we struggle to make sense of a world being shaped by AI and persevere to stay up to date with its developments, feeling like we are standing aboard a ship slowly sinking from under our feet, we can and must resist the temptation of giving in to the convenience these tools offer. It might be perceived as a Luddite claim, but holding on to more than a fragment of what we have cherished isn’t nostalgia. It is consciously avoiding the ostrich effect bubbling up from the tension between growth and sustainability.
Understanding the language of math, despite the abundance of calculators, gives us a toolkit for making sense of the world around us. Spreadsheets eliminated drudge work for the most part, and created new kinds of it. But thankfully, they did not take away our sense of numbers. With LLMs, I’d expect something similar, even if at a hitherto unknown pace and toward a seemingly unknown outcome. Our sense of stories and narratives will resist the temptations of the abundant world that is soon to be at our doorstep.
I have noticed a slight shift in the ease with which I can put down sentences. Our brain’s neuroplasticity follows the dictum: if you don’t use it, you lose it. Meaning, if you stop working on the skills you cherish, you are destined to lose them. The convenience of AI makes working on these skills feel wasteful, but it’s imperative that we bulwark against this rising tide by learning to use these tools in a way that is accretive rather than destructive. It points to a theme we witnessed with the rise of social media: co-creation and active participation drive happiness and enrichment, while passive consumption and doom-scrolling invite cognitive turn-off and a benign existence. What we choose will define us. Choose wisely, fellow humans.