Engineering Without a Compass

We software engineers can explain, in vivid detail, the difference between IAsyncDisposable and IDisposable in .NET. We can diagram a blue-green deployment strategy with canary analysis, automated rollback thresholds, and a GitOps reconciliation loop—every edge case accounted for. We may well be thoughtful, politically engaged, even well-read in philosophy. And yet there is a pattern—not universal, but common enough to be structural—where the critical faculties so finely honed for technical systems go quiet in other domains. A scientific paper, woven from incentives, p-hacking, selection bias, and institutional pressure, gets treated with the same implicit trust as a well-typed API contract. The second-order social costs of shipping a “reply-all by default” button, or a “streak freeze” metric in an education app, register not as unanswerable, but as unasked. Not because the engineer lacks the capacity to ask them, but because nothing in our professional formation has rewarded the asking. (For more on how this cognitive strain compounds with modern tooling, see The Modern Development Burnout Machine.)

This isn’t a failure of intelligence, but perhaps one of intellectual framing. There is friction between the engineering mindset—a world of deterministic systems, elegant solutions, and the intoxicating pursuit of “how”—and the critical mindset, which dwells in the messy, non-deterministic realms of ethics, sociology, and second-order consequence. We have trained a generation of builders to be masters of the puzzle, while inadvertently discouraging them from questioning the puzzle’s very premise. (I explored this theme of hidden assumptions in tooling more directly in Ethics in Programming: When Your IDE Has Ideology.)

From our first “Hello, World!”, our world as software engineers is built on reward systems that venerate the how. How do we scale? How do we optimise? How do we architect for resilience? These are the questions that yield promotions, conference talks, and the satisfaction of a perfectly abstracted system. The problem is usually handed down from on high—a product manager, a founder, a user story—sanctified and ready for execution.

The “why”—Why should we scale this? What is the societal cost of giving this tool to a billion users? Who will this harm, and who will it exclude?—is implicitly treated as someone else’s job. It’s relegated to the realm of “soft” problems. In a culture that worships the hard, the measurable, and the deterministic, these questions are structurally devalued.

This is the engineer’s fallacy—the belief that if a thing can be built, it should be built—and it persists even when we believe we are asking these questions. The engineer can’t help but see a complex social problem and reduce it to a technical one, because technical problems have elegant, satisfying solutions. The messy fallout—the harassment on that new social app, the exploitation of gig-economy workers—is not seen as a fundamental flaw in the premise. It’s a bug, something to be fixed in a future sprint.

But code is never neutral. Code is power. It decides who gets a loan, who is surveilled, whose voice is amplified, and who is hired. (This dynamic of observation and power is something I examined through the lens of sousveillance in The Power of Observation, Part II: Sousveillance as Resistance and Reclamation.) When an engineer insists they are “just” a builder, and that questions of ethics are “politics” (and therefore beneath them or outside their scope), they are engaging in an act of wilful naivety. It feels like a smart person deliberately blinding themselves to the consequences of their own labour to avoid the discomfort of responsibility.

This mirrors a deeper historical pattern. The clean separation of our codebases into microservices echoes the factory assembly line, which itself emerged from military logistics. The relentless focus on individual feature optimisation echoes the free-market theorists who prized individual choice above collective welfare. Even our “fairness” algorithms often employ 19th-century utilitarianism—maximising good for the majority while accepting harm to a minority as a statistical inevitability. And our dependency on opaque systems controlled by a handful of corporations is itself a form of structural power—one I traced in The New Normal: Our Digital Dependency.

If our tools carry hidden intentions, then the act of programming is never merely technical. Every database schema is a small philosophy of order. Every pull request is a moral stance. Every sprint retrospective that celebrates reduced latency without accounting for the climate cost of added server farms is a value judgment.

We can begin to change this by treating ethical questions with the same rigour we apply to technical ones. We can track ethical debt alongside technical debt. Imagine a dashboard that logs unresolved privacy concerns or accessibility gaps next to bug counts. A log entry might read: “Our recommendation algorithm currently favours urban users at the expense of rural users; need geographic fairness review Q3.” Another might note: “Facial recognition system struggles with dark skin tones—awaiting upgrades before scaling.”
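What might that look like in a repository? Below is a minimal sketch of an ethical debt register kept alongside the codebase. The type names, fields, and severity scale are illustrative assumptions on my part, not an existing standard or tool.

```typescript
// A sketch of an "ethical debt" register, tracked in the same repository as
// the code it concerns. Field names and the severity scale are assumptions.

type EthicalDebtItem = {
  id: string;
  concern: "privacy" | "accessibility" | "fairness" | "environmental";
  description: string;   // what the unresolved issue is
  affected: string;      // who bears the cost
  severity: "low" | "medium" | "high";
  reviewBy: string;      // a deadline: a sprint, a quarter, a launch gate
  owner: string;         // a named human, not "the team"
};

const register: EthicalDebtItem[] = [
  {
    id: "ED-014",
    concern: "fairness",
    description: "Recommendation algorithm favours urban users at the expense of rural users.",
    affected: "Rural users with sparse interaction histories",
    severity: "high",
    reviewBy: "Q3 geographic fairness review",
    owner: "recommendations lead",
  },
  {
    id: "ED-015",
    concern: "fairness",
    description: "Facial recognition error rate is markedly higher on darker skin tones.",
    affected: "Users with darker skin tones",
    severity: "high",
    reviewBy: "Before any scale-up of the feature",
    owner: "vision lead",
  },
];

// Surface it the way we surface bug counts: complain loudly while
// high-severity items remain open, rather than letting them fade into the backlog.
const open = register.filter((item) => item.severity === "high");
console.log(`${open.length} high-severity ethical debt items still open.`);
```

The point is less the data structure than the discipline: an unresolved ethical concern gets an owner, a deadline, and the same visibility as a failing test.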

We can reform technical education to combine coding with critical inquiry. A computer science curriculum where creating a relational database requires debating the ethics of data ownership. A frontend course where an accessible dropdown menu is not an edge case, but a foundational skill. By anchoring technical decisions to their human consequences, we transform “best practices” from neutral truths into accountable choices. (A model for what this accountability might look like in practice—designing with community values rather than profit extraction—is something I explored in From Open Source to Open Minds: Designing with Community and Ethics at Heart.)
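To make the dropdown example concrete, here is a minimal sketch of the kind of work that paragraph treats as foundational: wiring a disclosure-style dropdown with the ARIA state and keyboard behaviour that screen-reader and keyboard users depend on. The element ids are assumptions about the surrounding markup, not part of any particular framework.

```typescript
// Assumes a <button id="menu-button" aria-expanded="false"> and a
// <ul id="menu-list" hidden> exist in the page. Illustrative only.

const trigger = document.getElementById("menu-button") as HTMLButtonElement;
const menu = document.getElementById("menu-list") as HTMLUListElement;

function setOpen(open: boolean): void {
  // aria-expanded is the state assistive technology announces to the user.
  trigger.setAttribute("aria-expanded", String(open));
  menu.hidden = !open;
  if (open) {
    // Move focus into the menu so keyboard users are not stranded on the trigger.
    menu.querySelector<HTMLElement>("a, button")?.focus();
  }
}

trigger.addEventListener("click", () => {
  const isOpen = trigger.getAttribute("aria-expanded") === "true";
  setOpen(!isOpen);
});

// Escape closes the menu and returns focus to the trigger, so keyboard
// users can dismiss it without reaching for a mouse.
document.addEventListener("keydown", (event) => {
  if (event.key === "Escape" && trigger.getAttribute("aria-expanded") === "true") {
    setOpen(false);
    trigger.focus();
  }
});
```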

The terminal screen holds no inherent ethics, and the Jira board has no conscience. The ideology is not in the machine; it is in the assumptions we bring to it, the questions we choose to ask, and the questions we systematically ignore. The choice, as it always has been, remains with the humans programming it. We can build tools as slot machines, optimising for dopamine drips and profit. (The question of whether machines can even participate in the creative and ethical thinking we’re asking of ourselves is one I wrestled with in Do Androids Dream of Original Ideas.) Or we can build them as mirrors, reflecting humanity back to itself—in all its complexity, its fallibility, and its potential.

Every merge request commits not just code, but a moral stance. Let’s develop with that weight—and that possibility—in mind.