I've been observing the evolving technology landscape through the lens of entropy. Traditionally, software development, exemplified by frameworks like Ruby on Rails, has been about precision and predictability. Write code, run tests, and if all goes well, deploy. It's a systematic, low-entropy process designed to minimize uncertainty.
AI introduces exhilarating unpredictability and creativity. You're not just executing a set of predetermined commands but engaging in a dynamic conversation that can yield surprising outcomes.
Attempting to control and predict every aspect of AI's behavior is a logical extension of traditional software practices, but an increasingly impractical one. By its nature, AI leans toward a state of higher entropy.
Using AI to regulate itself is an intriguing proposition, and a recognition that we need a new oversight paradigm. I'm unsure how comfortable I am with one black box monitoring another black box, but are humans actually any better? We're flawed, just like the robot.
I'm embracing this entropic state of technology because there is no choice. Those overly restrictive in their approach to AI development will find themselves at a disadvantage. There's enormous value in adopting a more open, exploratory stance.
There will be problems; there are problems. Many of them. OpenAI just trained their model on copyrighted work. It is the largest copyright violation in the history of the world. What happened? Nothing.
Oh, thanks for helping me write this commentary, OpenAI.
It boils down to the question, "What can you do about it?" I increasingly find the answer is not "Nothing." It's scarier: "Use it to my advantage."
Principal, Thought Merchants