ai-native/

AI-Native

Production-grade AI systems: architecture, responsible-AI engineering, observability, retrieval, and the AI-specific threat surface.

5 topics in this section
ai-native/architecture/
AI System Architecture
The architectural patterns that distinguish a working AI prototype from a production AI system — recognising that adding a model to an architecture changes the architecture's stability properties, cost profile, and failure modes in ways that don't appear in classical software engineering.
ai-native/ethics/
AI Ethics & Responsible AI
The engineering discipline of measuring and managing the harms an AI system can cause — recognising that "responsible AI" is not a value statement but a set of measurable properties that need to be designed in, tested for, and continuously monitored.
ai-native/monitoring/
AI Monitoring & Observability
The signals that tell you whether an AI system is working — recognising that classical observability (latency, error rate, throughput) is necessary but radically insufficient, because AI systems can be perfectly fast, perfectly available, and producing wrong answers all day long.
ai-native/rag/
Retrieval-Augmented Generation
The architectural pattern of grounding a language model's output in a corpus the model didn't memorise — recognising that "we'll just plug in a vector database" is the demo, and the architecture is everything between the user query and the retrieved-and-grounded answer.
ai-native/security/
AI Security
The threats that exist because the system uses a foundation model — categorically distinct from classical application security, because the *input itself* and the *model itself* are now attack surfaces with no equivalent in pre-AI architectures.