Running parallel Claude Codes hacking away at my repository, building an AI pipeline in a weekend, feels like being handed the powers of an IT God. But that is software development power – your data remains either an asset or a hazard. Garbage In, Garbage Out hasn't gone anywhere.
If your data is bad, you will just have a confused LLM running over your crappy data – and confusing…you.
It is a waste of time. AI can't save you from bad thinking, e.g. bad documentation.
Was it mind-bogglingly delightful to build an end-to-end full-stack app – server, AI APIs (my solution uses OpenAI's very nice APIs), multiple mobile clients – over a long weekend? HELL YES. But I thought about my clients, some of them companies who now think their data swamps no longer drag them down, who now maybe think they don't need engineers.
Software development speed does not help when the fuel for said software is still crude, dirty syrup.
The smart companies, the ones that took care of their data, will now be flying like never before. The ones with a Confluence swamp of contradictory process descriptions, data assets full of indecipherable, illogical mush, and aged company axioms and guidance are going to be…
…stuck like beasts in a tar pit.
AI can't deliver insight from illogical chaos. It will just be confused and confusing. An AI fed contradictory customer records will confidently hallucinate business logic that NEVER EXISTED NOR SHOULD EXIST.
No, it will not add value.
It will multiply chaos.
The gap in data utilization capability between organizations just went from large to utterly astounding.
Take the tools, go build – but if your data is still sub-par, the great AI gifts of today (guardrailed fleets of Claude Codes building you solutions like a team of 10 tireless coders on steroids, custom GPTs, context sharing across multiple discussions, e.g. with OpenAI's ChatGPT Projects)…
…will not be available to you.
- Good news: data quality problems are solvable.
- Bad news: they don’t solve themselves.
