AI · Technology · Governance · RAG · Privacy

AI, don't forget about software engineering

13 November 2025

AI shines when it meets strong engineering. Many teams stitch together models, notebooks, and demos. They often skip the work that makes software safe, reliable, and maintainable. The result feels clever in a sprint, then fragile in production. Good engineering is not a luxury. It is the thing that turns ideas into dependable services.

Architecture is risk management

Architecture is not an ivory tower artefact. It is the plan that manages risk. A sound design separates concerns. Data flows are clear. Boundaries are explicit. Models, retrieval, storage, and serving each have their own place. You gain predictable latency, simpler scaling, and safer change. Without this, one tweak breaks three other parts. The system slows down at the worst moment. Fixes become guesswork.

Reliability is a product feature

Users judge AI by whether it works today and tomorrow. Reliability gives that trust. We define service levels for latency, accuracy, and cost. We add timeouts, retries, and circuit breakers. We write idempotent APIs so callers can safely retry. We plan fallbacks for model errors and empty retrievals. We stage releases, watch metrics, and roll back cleanly when needed. This discipline keeps small issues small.

Security must be designed in

Threats grow with every new dependency. We secure secrets, rotate keys, and use least privilege. We validate inputs and outputs, including prompts and retrieved content. We isolate model sandboxes from core data. We scan dependencies and ship a software bill of materials. We sign artefacts and verify them at deploy. These steps are not theatre. They stop real breaches.

Privacy is a first-class requirement

AI often touches personal and sensitive data. We practise data minimisation. We keep clear data contracts. We mask logs and remove identifiers at the edge. We track lineage from source to answer. We record consent and retention rules. We design deletion into the system. Privacy by design is faster than privacy as a late patch.
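The log-masking practice mentioned above can be sketched as a small redaction helper that strips identifiers before a line reaches the log sink. This is a minimal illustration, not our production tooling: the regex patterns and placeholder tokens are assumptions, and a real edge would use a vetted identifier catalogue.

```python
import re

# Illustrative patterns only -- a real system needs a maintained identifier catalogue.
_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),  # email addresses
    (re.compile(r"\+?\d[\d\s-]{7,}\d\b"), "<phone>"),     # phone-like digit runs
]

def redact(line: str) -> str:
    """Mask personal identifiers before the line leaves the service edge."""
    for pattern, placeholder in _PATTERNS:
        line = pattern.sub(placeholder, line)
    return line

print(redact("user jane.doe@example.com called +44 20 7946 0958"))
# -> user <email> called <phone>
```

Doing this at the edge means downstream stores, dashboards, and debugging sessions never see the raw identifiers in the first place.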
It also keeps regulators and customers on your side.

Data pipelines need engineering, not luck

Great models fail on bad inputs. Pipelines must clean, validate, and version data. Chunking, OCR, and embeddings all need checks. We track model and data versions together. We add schema checks and quality gates. We store test corpora and golden outputs. When a source changes its format, we know before the model does. That saves days of detective work.

APIs that last

AI features live behind APIs. Good APIs are stable, predictable, and documented. They have pagination, rate limits, and clear errors. They return trace IDs so issues can be followed end to end. Backwards compatibility is a habit, not a hope. When APIs are well made, other teams integrate once and stay productive. When they are not, everyone pays a tax forever.

MLOps needs software discipline

Models are code and data together. We version both. We run evaluation suites for faithfulness and safety. We compare releases with an A/B harness. We keep rollbacks ready. We ship with infrastructure as code. We track cost and latency budgets. We plan upgrades so a new model does not break downstream tools. The team can move fast because change is safe.

Observability is how we learn

You cannot fix what you cannot see. We instrument the pipeline with metrics, logs, and traces. We log prompts, context, and citations with care for privacy. We keep sample payloads and red-team cases. We alert on tail latency, cost spikes, and drift. We run post-incident reviews that lead to real fixes. Over time, the system becomes calmer.

Testing keeps promises honest

Unit tests keep helpers safe. Property tests catch edge cases. Contract tests keep services in sync. Load tests reveal hot spots and limits. Shadow tests compare new models to current ones on real traffic. We fix flakiness early so tests stay useful. Quality becomes a habit we can trust.

The seasoned engineer is a guardrail

Code assist tools are helpful.
They are not a substitute for judgement. Every AI team needs at least one seasoned engineer. This person sees coupling before it hurts. They set standards for code, reviews, and deployments. They enforce secure defaults and good API behaviour. They mentor the team, choose tools with care, and say no when a shortcut is a future outage. Their role is like model guardrails for ethics and privacy. They keep the whole product safe.

Cost and performance must be designed, not wished

Great ideas fail when bills explode or latency drifts. We profile early. We choose algorithms that fit the workload. We use vectorisation before clusters. We cache what repeats. We select the smallest capable model that meets the goal. We watch cost per request and cost per correct answer. Performance is a design choice, not a later fix.

Governance that enables delivery

Governance is often seen as a brake. Good governance is a map. It sets clear rules for data use, evaluation, and change control. It clarifies who approves what and when. It links ethics and privacy to code and tests. Teams move faster when the path is known and repeatable.

What good looks like

A reliable AI service feels calm. Deployments are small and frequent. Incidents are rare and brief. APIs are boring and solid. Data quality holds steady. Security reviews are routine, not drama. The model improves without fear because the system around it is strong. Users trust the product because it acts the same way every day.

AI is not only maths and data. It is software. Strong engineering turns clever into useful, and useful into trusted. Put a seasoned engineer at the heart of the team. Give them the mandate to protect architecture and code. Your models will be better for it, and your customers will notice.

This article was created by people. We have used artificial intelligence (AI) to help articulate our message and refine the text.
AI was employed as a tool to assist with structuring, identifying grammatical and spelling errors, and improving readability. The final document has been carefully reviewed and approved by our team.

Interested in working together?

If you're considering AI, data, or cloud modernisation, we can help you clarify what is feasible, what is safe, and what will create measurable value.

Get in touch