Since non-believers don’t invent the future and speculators are always on a hustle, I often turn to practitioners to get a fix on the coordinates of reality. It has always helped me maintain a sense of pragmatic optimism when the rest of the world around me seems either overtly hyperbolic or depressingly pessimistic.
people are rushing too quickly into hyped technology without understanding how best to use it. We’ve seen this throughout history with naive database implementations in the 1980s, the dot-com bust of the late ’90s, and the mobile web of the early 2000s. Whenever there is hype, we shuffle onto the easy path, forcing the tech into the product without understanding its weaknesses. We are more worried about being left behind than about actually doing something of value. We get there eventually, but only after realizing that we were asking the wrong questions. So many companies fail before they figure this out.
Gunpowder’s explosive force relies on combustion, effectively a very fast form of burning, which makes it easy to detonate with a lit fuse. But nitroglycerin does not burn. Its power derives from supersonic shock waves generated by atoms of oxygen, nitrogen, and carbon rearranging themselves to form more stable bonds after a physical disturbance.
starting with business-level impact in mind doesn’t mean you are putting your customers last. It means that you are putting the commercial relationship between your business and your customers front and center, and letting that relationship guide how you learn about and build solutions for your customers.
I saw more clearly that we’re entering a dizzying age of duality in AI. Is AI going to kill our jobs or create more jobs? Yes. Did I technically build a feature in an app that has since been pushed to a hundred million users, or did I cheat my way through an assignment by leaning heavily on AI and other humans? Yes. Do I need deep foundational knowledge of software programming to be a successful coder, or can I skate by without even knowing the name of the programming language I’m using? Also yes.
Low-impact work creates more complicated products which, in turn, lead to more dependencies and conflicts to manage. Those dependencies and conflicts discourage teams from taking on work that touches on the product’s commercial core. Which, in turn, encourages more low-impact work.
Some low-impact signs to watch out for:
- Teams that are only accountable for operational goals like velocity or number of features delivered
- Teams that reverse-engineer their goals from the work they already have planned
- Teams that broadly resist estimating impact because it’s “too complicated” or “involves too many things outside of our control”
The proliferation of one-size-fits-all “best practices,” of sanitized case studies from Silicon Valley darlings, of “best vs. the rest” narratives, has created an environment where just about everybody working within the real-world constraints of most companies’ business and funding models will never feel like their companies are doing things “the right way.”
So if I’m correct, then the future of build vs. buy will be “yes to both.” Companies will continue to buy complex and valuable component services for important parts of their business, but those components will be designed to be accessed and controlled by both humans and software. Some of that software will be AI agents acting on our behalf, and some will be workflows defined by customers (or system integrators) and generated with gen AI tools.
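To make that “one component, two callers” idea concrete, here is a minimal sketch in Python. The refund service, function names, and tool schema below are illustrative assumptions rather than any particular vendor’s API; the point is only that the same purchased capability can be driven directly by a scripted workflow and also described as a tool an AI agent can invoke.

```python
# Illustrative sketch: one bought component, two kinds of callers.
# All names and the schema shape are hypothetical, not a real vendor API.
import json

def issue_refund(order_id: str, amount_cents: int) -> dict:
    """Stand-in for a call to a purchased refunds component."""
    return {"order_id": order_id, "amount_cents": amount_cents, "status": "refunded"}

# 1) A customer- or integrator-defined workflow calls the component directly.
def refund_if_small(order_id: str, amount_cents: int, limit_cents: int = 5000) -> dict:
    """Auto-approve small refunds; route anything larger to a person."""
    if amount_cents <= limit_cents:
        return issue_refund(order_id, amount_cents)
    return {"order_id": order_id, "status": "needs_human_review"}

# 2) The same component described as a tool, in the JSON-schema style many
#    agent frameworks use, so AI agents acting on our behalf can call it too.
REFUND_TOOL = {
    "name": "issue_refund",
    "description": "Refund an order up to the approved limit.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "amount_cents": {"type": "integer"},
        },
        "required": ["order_id", "amount_cents"],
    },
}

if __name__ == "__main__":
    print(refund_if_small("A-1001", 1800))    # handled by the scripted workflow
    print(json.dumps(REFUND_TOOL, indent=2))  # handed to an agent as a callable tool
```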
Great thinking isn't about getting to the answer fastest. It's about exploring the problem space thoroughly enough to find the best answer—or sometimes, to redefine the question itself. AI allows us to accelerate this exploratory process. It lets us rapidly test multiple approaches, challenge our assumptions, and refine our thinking in real time. But only if we engage with it as a collaborative partner rather than a vending machine.
a lot of professionals operate in a single cognitive gear: convergent thinking. They jump immediately to solutions, rush toward decisions, and mistake speed for intelligence. They've been trained by decades of quarterly reviews and daily standups to believe that having an answer—any answer—is better than exploring the problem space. This isn't intelligence. It's algorithmic behavior. And it's exactly why companies are finding it so easy to replace middle management with AI systems. If you only know how to converge, you're just a slower, more expensive algorithm.
Business Strategy: Start by asking AI to explain market analysis fundamentals and what indicators signal real opportunities versus vanity metrics. Learn what solid business cases look like compared to wishful thinking or incomplete analysis.
Complex Analysis: Always ask AI to explain its methodology step-by-step before it analyzes data so you can follow the reasoning. Have it show you the key assumptions it's making and how they might affect conclusions. Request that complex analysis be broken into smaller parts you can verify independently.
Creative Work: Instead of accepting final outputs, ask AI to show you its reasoning process so you can guide the direction. Clarify what assumptions it's making about your audience, brand, or goals that you should confirm or correct. Request multiple approaches so you can choose the direction that fits your specific context.
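As a concrete illustration of the “show your reasoning first” pattern running through these tips, here is a minimal sketch in Python. The prompt scaffold, wording, and function names are my own illustrative assumptions, not a prescribed format; it simply wraps an analysis request so the model must state its methodology, assumptions, and independently verifiable parts before offering conclusions.

```python
# Illustrative prompt scaffold for the Complex Analysis advice above.
# The structure and wording are assumptions, not a required template.
def build_analysis_prompt(task: str, data_description: str) -> str:
    """Wrap an analysis request so the model explains its approach first."""
    return "\n".join([
        f"Task: {task}",
        f"Data: {data_description}",
        "",
        "Before giving any conclusions:",
        "1. Explain the methodology you will use, step by step.",
        "2. List the key assumptions you are making and how each could change the result.",
        "3. Break the analysis into smaller parts I can verify independently.",
        "Only after that, share your findings and at least two alternative approaches.",
    ])

if __name__ == "__main__":
    prompt = build_analysis_prompt(
        task="Estimate churn risk for the Q3 customer cohort",
        data_description="Monthly active usage and support-ticket counts per account",
    )
    print(prompt)  # paste into whichever chat interface or API you use
```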
Invest your time when AI outputs could affect revenue, risk, or reputation; these high-stakes areas demand preparation. Also prioritize fields where you're currently stuck and avoid AI collaboration entirely because you can't validate the results. Look for areas where you already have knowledge fragments to build on, making the path to competence shorter, and focus on subjects where you'll need to explain or defend AI-generated work to stakeholders. Skip preparation when the area remains peripheral to your core work or when the consequences of failure are minimal. Don't invest time where true experts are readily available for validation, and avoid extensive preparation when you're just exploring or experimenting with new ideas.