A presentation at AI_Dev in Amsterdam, Netherlands by Oleg Šelajev
Anyone can throw together an LLM, some MCP tools, and a chat interface, and get an AI assistant we could only dream of a few years back. Add some “business logic” prompts and you have an AI workflow; hopefully a helpful one. But how do you take it from a local hack to a production application? Typically, you drown in privacy questions, juggle npx commands for MCP servers, and end up debugging OAuth flows before anything starts to make sense.
In this session, we show a repeatable process for turning your local AI workflow experiments into a production-ready deployment using containerized, static configurations.
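To make the idea concrete, here is a minimal sketch of one such static configuration: instead of spawning an MCP server with an npx command, the client launches a pinned container image. It uses the MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the image name `mcp/example-server:1.0.0` and the client name are placeholder assumptions, not something taken from the talk.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Run the MCP server as a pinned container image over stdio,
// rather than an unpinned `npx` command. The image name below
// is a placeholder.
const transport = new StdioClientTransport({
  command: "docker",
  args: ["run", "-i", "--rm", "mcp/example-server:1.0.0"],
});

const client = new Client({ name: "demo-app", version: "0.1.0" });

await client.connect(transport);
console.log(await client.listTools());
```

Pinning the server to a specific image tag is what makes the configuration static: the same artifact runs on a laptop and in production, with no dependency resolution at startup.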
Whether you prefer chat interfaces or replace them with application UIs, you’ll leave with solid ideas for going from a cool demo to real applications without the existential dread of DevOps.
The following resources were mentioned during the presentation or are useful additional information.