Independent AI/ML Engineering.
The value of an AI project is usually determined before the first model is trained. Most projects fail at problem definition, not implementation -- a team builds something technically competent that answers the wrong question, or discovers six months in that the data does not support the goal it was meant to serve. The audit and strategy work at the start of an engagement exists to surface these problems early, when they are still cheap to resolve.
We work with organisations at the point where an AI investment is being considered or has stalled. The work covers data audit, feasibility assessment, strategy, and architecture -- the decisions that shape everything downstream. Your engineers build the system; we establish what the system should be and whether the conditions for it to succeed exist.
The organisations we work with tend to share a few characteristics: they have a genuine business problem that AI might address, they have data, and they have engineers who can build once the architecture is clear. What they are typically missing is confidence that the problem is well-defined and that the approach is sound before committing to a build.
This includes CTOs planning a first ML hire and wanting to know what that hire should be building; founders who need a feasibility assessment before committing budget; teams that have attempted an AI project and want to understand what went wrong; and engineering teams that want a senior technical voice during build to catch architectural decisions before they become expensive to reverse.
A fixed-scope engagement over two to four weeks. We audit your data infrastructure, assess feasibility, define the recommended approach, and produce an architecture design. Deliverables are written documents: a data audit report, an ML strategy document, and an architecture specification, with slide deck versions for leadership review.
Most engagements start here.
Get in touch →

An advisory engagement during your team's build, typically two to three days per week over eight to twelve weeks. We review architecture decisions as they are made, surface problems before they compound, and work alongside your engineers as the system develops. The knowledge transfer is a consequence of working together, not a separate deliverable.
A monthly retainer for teams post-launch. We review system performance, monitor for drift and degradation, track relevant developments in the field, and meet quarterly to assess whether the strategy remains sound. Two to four days of engagement per month.
Engagements have included semantic search across legal, medical, and insurance document corpora; fraud classification in fintech production environments; retrieval-augmented generation for market research workflows; AI-assisted image analysis in medical diagnostics; private inference architecture for regulated sector clients with data sovereignty requirements; and feasibility assessments for organisations at early data maturity.
Structured knowledge from client work, explorations, and deployment challenges.
Principal Machine Learning Engineer & AI Consultant
Edward has a background in Computer Science, Data Visualisation, and Artificial Intelligence, and has spent two decades building and deploying AI systems across academic research, regulated industry, and early-stage product development. He has worked with organisations including the Bank of England, the Office for National Statistics, and Admiral plc, as well as startups across health, adtech, and life sciences. For the past six years he has focused on the early product development phase: scoping and architecting AI systems, establishing data infrastructure, and translating ambiguous organisational needs into deliverable technical designs. Recent work spans retrieval-augmented generation, knowledge graph construction, and private cloud deployment on AWS.
Tools and products available for subscription.
Typed inference pipelines over privately hosted open-weight models. Define your task, build your pipeline, measure and improve.
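The "typed pipeline" idea can be sketched in plain Python. All names below are illustrative, not the product's API: each step declares its input and output types, so the shape of the pipeline is checkable before any model is called.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A task definition (illustrative stand-in)."""
    text: str

@dataclass
class Result:
    label: str
    confidence: float

# A pipeline step is just a typed callable; composing steps keeps
# the input/output contract visible to the type checker.
Step = Callable[[Task], Result]

def rule_based_step(task: Task) -> Result:
    # Stand-in for a call to a privately hosted open-weight model.
    label = "question" if task.text.strip().endswith("?") else "statement"
    return Result(label=label, confidence=0.5)

def run(pipeline: Step, task: Task) -> Result:
    return pipeline(task)
```

Because every step carries the same `Task -> Result` signature, steps can be swapped or measured against each other without touching the surrounding code.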
Learn more →

Behaviour observability for developer-founders. Structured JSON event logging with session-level aggregations, webhook triggers, and daily summaries -- no dashboards, no SDKs, no tracking pixels.
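The underlying pattern is simple enough to sketch. This is an illustration of structured JSON event logging in general, not Clientlog's actual event format: each event is one JSON object, and session-level aggregation is a group-by over a shared session id.

```python
import json
from collections import Counter

def emit(event_type: str, session_id: str, **fields) -> str:
    # One event per line: a flat JSON object carrying a session id.
    return json.dumps({"type": event_type, "session": session_id, **fields})

def sessions(lines: list[str]) -> Counter:
    # Session-level aggregation: count events per session.
    return Counter(json.loads(line)["session"] for line in lines)

log = [
    emit("signup", "s1"),
    emit("export", "s1", fmt="csv"),
    emit("signup", "s2"),
]
```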
Visit Clientlog →
A multilingual children's story generator. Choose a language, a theme, and a cast of characters -- Popstory writes the story. Built with and for younger readers.
Visit Popstory →

Declarative workflow execution engine. Define dependency graphs in YAML; runfox resolves and advances them.
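The general shape of declarative dependency resolution, sketched with Python's standard library. The schema shown is hypothetical, not runfox's actual YAML format: a graph maps each step to the steps it depends on, and resolution produces an order that respects those dependencies.

```python
from graphlib import TopologicalSorter

# A dependency graph as YAML would parse it (hypothetical schema):
# each step lists the steps it needs to have completed first.
workflow = {
    "fetch": [],
    "clean": ["fetch"],
    "train": ["clean"],
    "report": ["train", "clean"],
}

def resolve(graph: dict[str, list[str]]) -> list[str]:
    # Resolve the graph into an execution order where every step
    # appears after all of its dependencies.
    return list(TopologicalSorter(graph).static_order())
```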
PyPI →

JSONLogic extended with JSONPath variable resolution. Compose conditional logic as data structures.
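"Logic as data" means a rule is an ordinary JSON structure that can be stored, diffed, and evaluated. A toy evaluator illustrating the idea -- not the package's API, and with only a dotted-path subset of JSONPath:

```python
def resolve(path: str, data: dict):
    # Minimal JSONPath-style lookup: "$.a.b" walks nested keys.
    node = data
    for key in path.lstrip("$.").split("."):
        node = node[key]
    return node

def evaluate(rule, data: dict):
    # Rules are data: {"op": [args...]}. Strings starting with "$."
    # are treated as paths into the input document.
    if isinstance(rule, str) and rule.startswith("$."):
        return resolve(rule, data)
    if isinstance(rule, dict):
        (op, args), = rule.items()
        vals = [evaluate(a, data) for a in args]
        if op == ">":
            return vals[0] > vals[1]
        if op == "and":
            return all(vals)
    return rule  # literals evaluate to themselves

rule = {"and": [{">": ["$.order.total", 100]}, "$.customer.verified"]}
```

Because the rule is plain data, it can live in a database or config file and be evaluated against any document with the right shape.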
PyPI →

A thin, typed wrapper over the AWS DynamoDB client. Dataclass-based item definitions with clean get, put, and query patterns.
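A sketch of the dataclass-to-item pattern such a wrapper is built on -- illustrative only, not the package's API. A dataclass instance is serialised into DynamoDB's attribute-value format, the shape `put_item` expects:

```python
from dataclasses import dataclass, asdict

@dataclass
class User:
    pk: str
    name: str
    age: int

def to_item(obj) -> dict:
    # Serialise a dataclass into DynamoDB attribute-value format,
    # suitable for put_item(Item=...).
    def attr(v):
        if isinstance(v, bool):
            return {"BOOL": v}
        if isinstance(v, (int, float)):
            return {"N": str(v)}  # DynamoDB numbers travel as strings
        return {"S": str(v)}
    return {k: attr(v) for k, v in asdict(obj).items()}
```

The reverse mapping (item to dataclass) follows the same field-by-field pattern, which is what gives the get and query side its typed return values.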
PyPI →