It's been a while since I wrote anything here, so I thought I'd come back with a short post.
I'm writing this with AI, though not using AI to draft the content so much as to transcribe my speech. I'm using Wispr Flow, because why not?
To some extent, most of these AI tools bring an abundance of content, and it pretty much stops there. They are powerful, and they certainly seem powerful, but at the end of the day what they produce is just digital content, be it text, audio, or video.
I also recently read an article called "Something Big Is Happening," which seems to have taken the blogosphere by storm. Its core hypothesis is that something big is happening in AI, and while people are aware of it and there is hype around it, most don't grasp just how big it is or how fast it's moving. The article calls for taking daily steps to pick up AI skills, understand what's really happening, and build up a muscle so that you're not disintermediated when the big thing arrives. The idea is that you should spend at least an hour every day tinkering with AI tools, playing around with them, testing their limitations, and understanding their strengths.

Most of us are trying out AI in some form at this point, whether to take meeting notes, summarize documents, refine an email, or have it take on a particular persona to reformulate a message. But most of these use cases are pretty pedestrian compared to using AI for research or for deeper critical thinking. AI is getting there slowly and steadily, but it does require a bit of setup: context files, better prompts, some systems engineering, and so on. And that is what the article argues: if you don't keep doing that work, then even though the output may not be especially insightful or critical right now, the technology will soon cross a threshold, perhaps as early as 2026, and simply become superintelligent.
What is the urgency? That's what most people ask, especially those who are not well versed in technology and not keen on trying something new. They expect to skip the adoption curve entirely and jump straight to the finished product that AI will become a year or two down the line. But that's not how technology works. If you are not constantly iterating, you will soon be left in the dust, because you won't understand the trajectory that AI and its applications took to get where they are. Without that, you won't understand its weaknesses, its components, its building blocks, and so you won't be able to use it effectively. It's like what happened with earlier technologies such as the internet or cloud computing: if you didn't follow along as they matured and went through their various pitfalls and chasms, you won't be the insider you need to be once AI becomes a far more powerful tool, whether that's one, two, five, or ten years down the line.