In reality, prediction markets produce the opposite of accurate, unbiased information. They encourage anyone with an informational edge to use their knowledge for personal financial gain. In this way, prediction markets are the perfect technology for a low-trust society, simultaneously exploiting and reifying an environment in which it becomes ever harder to trust the motives behind any person or action.
The most valuable tools in this new world won’t be the ones that generate the most code fastest. They’ll be the ones that help us think more clearly, plan more carefully, and keep the quality bar high while everything accelerates around us.
What impressed Churchill during his trip was a quality of resilience, an “unshakeable faith in a golden future,” that he did not see at home. Unlike the English, Americans were not deathly afraid of making mistakes with their money because they believed that even if they were wiped out, opportunities to make it all back, and more, would continue to present themselves. “Before disparaging American methods,” he wrote, “the English critic would do well to acquaint himself with the inherent probity and strength of the American speculative machine. It is not built to prevent crises, but to survive them.”
The man who runs naked across a football field certainly disrupts, but he does not change the rules of the game. The whole notion of disruption is adolescent: It assumes that after the teenagers make a mess, the adults will come and clean it up. But there are no adults. We own this mess.
Sometimes institutions are deprived of vitality and function, turned into a simulacrum of what they once were, so that they undergird the new order rather than resisting it.
Since non-believers don’t invent the future and speculators are always on a hustle, I often turn to practitioners to get a fix on the coordinates of reality. It has always helped me maintain a sense of pragmatic optimism when the rest of the world around me seems either overtly hyperbolic or depressingly pessimistic.
People are rushing too quickly into hyped technology without understanding how best to use it. We’ve seen this throughout history with naive database implementations in the 1980s, the dot-com bust of the late ’90s, and the mobile web of the early 2000s. Whenever there is hype, we shuffle down the easy path, forcing the tech into the product without understanding its weaknesses. We are more worried about being left behind than about actually doing something of value. We get there eventually, but only after realizing that we were asking the wrong questions. So many companies fail before figuring this out.
Gunpowder’s explosive force relies on combustion, effectively a very fast form of burning, which makes it easy to detonate with a lit fuse. But nitroglycerin does not burn. Its power derives from supersonic shock waves generated by atoms of oxygen, nitrogen, and carbon rearranging themselves to form more stable bonds after a physical disturbance.
Starting with business-level impact in mind doesn’t mean you are putting your customers last. It means that you are putting the commercial relationship between your business and your customers front and center, and letting that relationship guide how you learn about and build solutions for your customers.
I saw more clearly that we’re entering a dizzying age of duality in AI. Is AI going to kill our jobs or create more jobs? Yes. Did I technically build a feature in an app that has since been pushed to a hundred million users, or did I cheat my way through an assignment by leaning heavily on AI and other humans? Yes. Do I need deep foundational knowledge of software programming to be a successful coder, or can I skate by without even knowing the name of the programming language I’m using? Also yes.
Low-impact work creates more complicated products, which, in turn, lead to more dependencies and conflicts to manage. Those dependencies and conflicts discourage teams from taking on work that touches the product’s commercial core. Which, in turn, encourages more low-impact work.
Some low-impact signs to watch out for:

- Teams that are only accountable for operational goals like velocity or number of features delivered
- Teams that reverse-engineer their goals from the work they already have planned
- Teams that broadly resist estimating impact because it’s “too complicated” or “involves too many things outside of our control”
The proliferation of one-size-fits-all “best practices,” of sanitized case studies from Silicon Valley darlings, of “best vs. the rest” narratives, has created an environment where just about everybody working within the real-world constraints of most companies’ business and funding models will never feel like their companies are doing things “the right way.”
So if I’m correct, then the future of build vs buy will be “yes to both.” Companies will continue to buy complex and valuable component services for important parts of their business, but these components will be designed to be accessed and controlled by both humans and software. Some of that software will be AI agents acting on our behalf, and some will be customer (or system integrator) defined workflows generated from gen AI tools.
Great thinking isn't about getting to the answer fastest. It's about exploring the problem space thoroughly enough to find the best answer—or sometimes, to redefine the question itself. AI allows us to accelerate this exploratory process. It lets us rapidly test multiple approaches, challenge our assumptions, and refine our thinking in real time. But only if we engage with it as a collaborative partner rather than a vending machine.
a lot of professionals operate in a single cognitive gear: convergent thinking. They jump immediately to solutions, rush toward decisions, and mistake speed for intelligence. They've been trained by decades of quarterly reviews and daily standups to believe that having an answer—any answer—is better than exploring the problem space. This isn't intelligence. It's algorithmic behavior. And it's exactly why companies are finding it so easy to replace middle management with AI systems. If you only know how to converge, you're just a slower, more expensive algorithm.
Business Strategy: Start by asking AI to explain market analysis fundamentals and what indicators signal real opportunities versus vanity metrics. Learn what solid business cases look like compared to wishful thinking or incomplete analysis.
Complex Analysis: Always ask AI to explain its methodology step-by-step before it analyzes data so you can follow the reasoning. Have it show you the key assumptions it's making and how they might affect conclusions. Request that complex analysis be broken into smaller parts you can verify independently.
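As a minimal illustrative sketch of that advice (the helper name, prompt wording, and example inputs are my own, not from the text), a reusable template that front-loads the methodology-and-assumptions request before any data is analyzed might look like:

```python
def build_analysis_prompt(question: str, data_description: str) -> str:
    """Build a prompt that asks the model to explain its methodology,
    surface its key assumptions, and break the analysis into smaller
    parts that can be verified independently, before touching the data."""
    return (
        "Before analyzing anything, explain your methodology step by step.\n"
        "List the key assumptions you are making and how each could affect "
        "your conclusions.\n"
        "Then break the analysis into smaller parts I can verify "
        "independently.\n\n"
        f"Question: {question}\n"
        f"Data: {data_description}"
    )

# Hypothetical usage: the question and data description are placeholders.
prompt = build_analysis_prompt(
    question="Which customer segment is driving churn?",
    data_description="12 months of subscription events, one row per account per month",
)
print(prompt)
```

The point of the template is ordering: the verification demands come first, so the model commits to a methodology before producing conclusions you would otherwise have to take on faith.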