Automating the Production Schedule: Our Confluent Accelerator Journey

We just presented Taktora at Confluent's demo day. This post covers what we built during the accelerator, why we chose a streaming architecture for production scheduling, and what we learned along the way.
Why We Applied to the Confluent Startup Accelerator
Taktora is an AI production scheduling platform for mid-size manufacturers. We help factories replace spreadsheet-based planning with a system that adapts to the floor in real time. When we started building the product, we made an early architectural decision that shaped everything: treat the factory floor as an event stream, not a database to poll.
That decision led us to Apache Kafka and Confluent Cloud. We were already building on Kafka when the accelerator application opened. Getting accepted meant direct access to Confluent engineers who understood streaming infrastructure at a level we could not find anywhere else. For an early-stage company building real-time manufacturing software, the value of that technical mentorship is hard to overstate.
The Problem That Drove the Architecture
Every mid-size factory runs on a production plan that breaks by 10 AM. The ERP records what already happened. The spreadsheet tries to guess what comes next. Nobody has real-time context on what is actually happening on the floor right now.
Rush orders come in at the last minute. Inventory counts are off. Operators call out sick with no notice. The planner has to react to each of these on the spot. The schedule is already broken, and everyone on the floor knows it, but rebuilding it takes 5 to 10 hours every week because the recovery is entirely manual. Meanwhile, one hour of unplanned downtime on a filling line costs $30,000 to $50,000.
Traditional scheduling software does not solve this because it operates in batches. A planner gathers data, runs an optimization, and generates a static plan for the next shift or week. The moment something changes on the floor, that plan is invalid. The planner is back to spreadsheets and phone calls. This is not a failure of the planners. It is a failure of the tools. A static plan cannot manage a dynamic environment.
We built Taktora to drastically shorten the time between a disruption and an updated schedule. The system takes input in three ways: cameras on the line that detect deviations in real time, an AI agent that managers can talk to directly, and a drag-and-drop interface where planners move jobs while everything downstream reoptimizes automatically.
Why Streaming Was the Only Architecture That Made Sense
When you watch a factory floor, you see a sequence of events. A changeover starts. A machine completes a cycle. A quality check passes. A pallet of raw materials gets consumed. Each of these events carries information that should influence the production schedule immediately.
Polling a database every few minutes for state changes does not work. By the time you detect a problem, the downstream impact has already cascaded through the schedule. We needed an architecture where every event is captured, ordered, and processed as it happens. That is exactly what Kafka provides.
We model the entire production process as a series of events logged to Kafka topics. Machine state changes, production progress updates, material consumption signals, and operator actions all flow through the same streaming backbone. The production schedule itself becomes a materialized view of this event stream, always reflecting the current state of the system. When a machine goes down, the scheduling engine consumes that event within seconds, invalidates the affected portions of the schedule, and computes a new optimized sequence for all available resources.
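The materialized-view idea can be sketched in a few lines of Python. This is purely illustrative: the event kinds, field names, and state shape below are invented for the example, not Taktora's actual schemas.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str          # e.g. "machine_down", "machine_up", "job_complete"
    line_id: str
    payload: dict = field(default_factory=dict)

@dataclass
class ScheduleView:
    """Materialized view: fold the event stream into current schedule state."""
    available: dict = field(default_factory=dict)   # line_id -> machine up?
    completed: set = field(default_factory=set)     # finished job ids
    dirty_lines: set = field(default_factory=set)   # lines needing a reschedule

    def apply(self, event: Event) -> None:
        if event.kind == "machine_down":
            self.available[event.line_id] = False
            self.dirty_lines.add(event.line_id)     # invalidate affected schedule
        elif event.kind == "machine_up":
            self.available[event.line_id] = True
            self.dirty_lines.add(event.line_id)
        elif event.kind == "job_complete":
            self.completed.add(event.payload["job_id"])

view = ScheduleView()
for e in [Event("job_complete", "line-1", {"job_id": "J-17"}),
          Event("machine_down", "line-2")]:
    view.apply(e)

print(view.dirty_lines)   # line-2 is now flagged for rescheduling
```

In the real system the consume-and-fold loop runs against Kafka topics and the "dirty" set triggers the optimizer; the sketch only shows the fold.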
What We Built During the Cohort
The accelerator gave us the runway and mentorship to build our complete streaming pipeline from the factory floor to the scheduling engine.

Camera feeds hit our Jetson edge device, which runs inference and extracts item counts, line velocity, and product classification data. That data is published over MQTT into Confluent Cloud, where we ingest three topics: item counts, product classifications, and sensor telemetry covering line state, drift detection, and stoppage events.
From there, we process the streams in real time to generate alerts and production metrics. These flow in two directions: into Prometheus and Grafana for monitoring, and directly back into our scheduling engine so disruptions can automatically trigger rescheduling within seconds. A custom bridge pushes updates to the web client so the planner is always looking at live production state.
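To make the edge-to-cloud flow concrete, here is a sketch of what messages on those three topics might look like. The topic names and payload fields are assumptions for illustration only, not our production schemas.

```python
import json
import time

# Hypothetical payload builder for the three edge-to-cloud topics.
def edge_message(topic: str, line_id: str, **fields) -> tuple[str, str]:
    """Return (topic, json_payload) for one edge event."""
    body = {"line_id": line_id, "ts": time.time(), **fields}
    return topic, json.dumps(body)

count_msg = edge_message("edge.item_count", "line-1", count=42)
cls_msg = edge_message("edge.classification", "line-1",
                       sku="LIME-330", confidence=0.97)
tel_msg = edge_message("edge.telemetry", "line-2", state="STOPPED")

topic, payload = tel_msg
print(topic, json.loads(payload)["state"])  # edge.telemetry STOPPED
```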
Under the hood, we use time-windowed aggregation to balance throughput and latency. Counts accumulate over short intervals, then flush to the dashboard and scheduler. This keeps throughput efficient without sacrificing visibility. The important thing is that every event can still influence a scheduling decision within seconds.
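A hand-rolled tumbling-window counter shows the idea; the real pipeline does this inside Confluent's stream processing rather than in application code.

```python
from collections import defaultdict

def window_counts(events, window_s=5):
    """events: iterable of (timestamp_s, line_id, count).
    Returns {(window_start, line_id): total}, so each flush carries one
    number per line per interval instead of one message per detected item."""
    buckets = defaultdict(int)
    for ts, line_id, count in events:
        window_start = int(ts // window_s) * window_s  # tumbling window
        buckets[(window_start, line_id)] += count
    return dict(buckets)

events = [(0.4, "line-1", 3), (2.1, "line-1", 4), (6.0, "line-1", 5)]
print(window_counts(events))
# {(0, 'line-1'): 7, (5, 'line-1'): 5}
```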
This is why we built on Confluent from day one. It gives us reliable, low-latency streams we can trust to drive real-time operational decisions, not just dashboards. As we scale across more lines and layer in more AI, that foundation lets us increase data volume without slowing down the system or the decisions it drives.
ShadowTraffic and the Simulation Environment
One of the biggest challenges in building factory software is that you cannot run a real production line in your office. Production jobs take hours. Changeovers take hours. You need a way to simulate realistic data at realistic speeds to validate the full pipeline end to end.
Michael Drogalis at ShadowTraffic helped us solve this. ShadowTraffic generates realistic simulated production data that flows through our entire Confluent pipeline exactly as real sensor data would. We used it to simulate two canning lines running beverage SKUs at 500 items per minute, complete with changeovers, velocity drift, and stoppage events.

This gave us the ability to validate our architecture, test our anomaly detection, and demo the full system without needing a live factory connected. It was essential for the demo day presentation and continues to be a core part of our development and testing workflow. Being able to replay and modify production scenarios without accessing a physical plant accelerates our iteration speed significantly.
What the Mentors Taught Us
Working with Tim Graczewski, Daniel Takabayashi, and Siddharth Bedekar at Confluent pushed our understanding of streaming infrastructure forward. Three lessons stood out.
First, schema design matters more than we expected. Getting our event schemas right early, especially around production state changes and changeover step completions, made everything downstream cleaner. The mentors challenged us to think about schema evolution from day one using the Confluent Schema Registry so we would not break consumers as the product evolves. In manufacturing, where integrations with ERPs and MES systems are common, backward-compatible schema changes are not optional.
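The backward-compatibility idea can be sketched in a few lines: a consumer on the newer schema fills in defaults for fields an older producer never wrote, which is how Avro-style evolution keeps old records readable. The field names here are invented for illustration.

```python
# v2 of a hypothetical event schema added "operator_id" with a default,
# so records written under v1 (which lack the field) still decode cleanly.
NEW_SCHEMA_DEFAULTS = {
    "operator_id": None,   # added in v2; old records do not carry it
}

def decode_with_defaults(record: dict, defaults: dict) -> dict:
    """Fill in defaults for fields the (older) writer did not emit."""
    return {**defaults, **record}

old_record = {"line_id": "line-1", "state": "CHANGEOVER"}
print(decode_with_defaults(old_record, NEW_SCHEMA_DEFAULTS))
```

Schema Registry enforces this kind of compatibility at registration time, so an incompatible change is rejected before it can break a consumer.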
Second, topic partitioning has real consequences for scheduling. Events for a specific production line must be processed in order. A misordered sequence of state changes could cause the scheduler to make incorrect decisions about which jobs are completed, which machines are available, and what changeover is currently in progress. The Confluent team helped us design a partitioning strategy that guarantees ordering where it matters while still allowing parallel processing across independent lines.
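The mechanism behind that guarantee is consistent keying: hash the line ID to pick a partition, so one line's events stay in order on a single partition while different lines spread across partitions for parallelism. A toy partitioner makes the point (Kafka's default partitioner uses murmur2; sha256 here is just for a deterministic illustration).

```python
import hashlib

def partition_for(line_id: str, num_partitions: int) -> int:
    """Map a record key to a partition index, deterministically."""
    digest = hashlib.sha256(line_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Every event keyed "line-1" lands on the same partition, so its state
# changes are consumed in order; "line-2" can process in parallel.
keys = ["line-1", "line-2", "line-1", "line-1", "line-2"]
assignments = [partition_for(k, 6) for k in keys]
```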
Third, the architecture we built today needs to handle significantly more data tomorrow. When we add more cameras, more sensor types, and more production lines per deployment, the system cannot fall over. The accelerator workshops on stream processing prepared us to scale the platform without requiring a fundamental redesign. That forward-thinking approach saved us from building something that would work for two lines but break at ten.
The Demo Day Presentation
For demo day, we showed the full loop live. The streaming dashboard displayed two canning lines with simulated production data flowing through Confluent in real time. Item counts updated live. Progress bars filled as jobs completed. When we triggered a velocity drift on Line 2, the alert fired through Confluent and fed directly back to the scheduler. The planner saw the deviation immediately without anyone having to report it manually.
We then showed the scheduling platform that all of this data feeds into: a Kanban board with jobs laid out across both lines and changeover blocks computed from our procedure templates. Each changeover is modeled as a sequence of steps based on how the factory actually runs the transition between products. Some steps always execute. Others only trigger based on what is specifically changing between the outgoing and incoming product.
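The conditional-step logic can be sketched like this; the step names and product attributes are invented for the example.

```python
ALWAYS = None  # sentinel: step runs on every changeover

# Hypothetical changeover template: (step name, attribute that must differ
# between outgoing and incoming product for the step to be required).
CHANGEOVER_TEMPLATE = [
    ("purge_lines", ALWAYS),
    ("swap_labels", "label"),        # only if the label differs
    ("flush_flavor", "flavor"),      # only if the flavor differs
    ("change_can_size", "can_size"), # only if the can size differs
]

def changeover_steps(outgoing: dict, incoming: dict) -> list:
    """Return the steps this specific product transition requires."""
    steps = []
    for step, attr in CHANGEOVER_TEMPLATE:
        if attr is ALWAYS or outgoing.get(attr) != incoming.get(attr):
            steps.append(step)
    return steps

out_sku = {"flavor": "lime", "label": "A", "can_size": 330}
in_sku = {"flavor": "cola", "label": "A", "can_size": 330}
print(changeover_steps(out_sku, in_sku))  # ['purge_lines', 'flush_flavor']
```

Because only the flavor changes in this transition, the label swap and can-size change are skipped, which is what keeps the computed changeover block as short as the real one.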
We also demonstrated Friday, our AI scheduling agent. A manager types a problem in plain English, like "we are out of lime flavoring," and Friday reads the full schedule, identifies every affected job across both lines, calculates the total unit impact, and presents options for how to respond. No forms, no tickets, no manual searching. The planner describes the situation and the system handles the rest.
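The impact-analysis step behind that interaction can be sketched as a simple filter over the live schedule. The natural-language parsing is elided here, and all job data is invented for the example.

```python
# Hypothetical schedule rows; the real system reads these from live state.
SCHEDULE = [
    {"job": "J-01", "line": "line-1", "units": 12000,
     "materials": ["lime flavoring", "cans-330"]},
    {"job": "J-02", "line": "line-1", "units": 8000,
     "materials": ["cola syrup", "cans-330"]},
    {"job": "J-03", "line": "line-2", "units": 5000,
     "materials": ["lime flavoring", "cans-473"]},
]

def material_impact(schedule, missing_material):
    """Find every job that needs the missing material and total its units."""
    affected = [j for j in schedule if missing_material in j["materials"]]
    return affected, sum(j["units"] for j in affected)

affected, total_units = material_impact(SCHEDULE, "lime flavoring")
print([j["job"] for j in affected], total_units)  # ['J-01', 'J-03'] 17000
```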
The response from the audience validated what we believed from the start: production scheduling is a real-time problem, and a streaming architecture is the right foundation for solving it.
What Comes Next
We are planning to integrate Confluent Tableflow to materialize our streaming data directly into the scheduling engine without maintaining a separate sync layer. We also want to evolve our classification pipeline into a full context engine that goes beyond identifying what product is running and actually understands the full production state of the line.
We are live in two factories today with $40K in annual recurring revenue and over $3M in qualified pipeline sourced through our cofounder's direct California manufacturing network. Our hardware deploys in days, not months. The streaming architecture we built during the accelerator is the foundation everything else runs on.
Building on Confluent was one of the most significant technical decisions we made. The accelerator gave us the mentorship and validation to execute it well. We are going to keep pushing real-time intelligence on the factory floor.