Philosophy of Artificial Fallibility
A research programme on the conditions under which artificial systems produce artefacts that resemble epistemic outputs (code, explanations, justified modifications) without exhibiting the structural properties those outputs would normally signal when produced by humans. The programme combines analytic argument with empirical probes of contemporary AI systems.
Three central concepts
- Architectures of Error — the causal mechanisms behind a system's failure modes.
- Bidirectional Coherence Paradox — how apparent competence and actual grounding invert as a system's behaviour becomes more or less observable.
- Code Structure Evolution — what is lost when software outlives its reasons.
Three papers currently constitute the programme. Two more are planned.