
Meddle
Get your data, easily.
37 followers
Meddle is an integration platform that lets you collect, manage, and analyze both production data and IT data. Meddle unifies data from multiple sources into a single, coherent platform, eliminating fragmented information and enabling holistic operational insights. With Meddle, you can truly add value to your data by streamlining complex processes, reducing energy waste, and optimizing costs. Finally, you can query the data in natural language to understand trends in real time.

@michele_lacorte1 Congrats on the launch, Michele! “Industrial data plumbing” is usually 90% duct tape — love the idea of streaming once, normalizing, and routing everywhere. 😄
Quick Q: for brownfield factories, what’s the first “it just works” connector you see most (OPC UA / Modbus / MQTT)? And how do you handle schema + data quality when signals get messy?
This looks like a thoughtful approach to breaking down data silos. Would love to hear a real-world example of how combining production and IT data in Meddle changed an operational decision or reduced waste!
Thank you very much, @daniele_from_mapo_tapo !
We integrated some machinery for a customer and, based on the hourly electricity price, we were able to turn the machinery on or off, thus reducing electricity consumption and saving the customer money.
This is just one example of OT/IT integration. In our opinion, data must cooperate and not remain in silos!
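For illustration only, here is a minimal Go sketch of that kind of rule: the machine runs only when the hourly electricity price is at or below a customer-defined threshold. The types, sample prices, and threshold are hypothetical and not Meddle's actual implementation.

```go
// Hypothetical sketch: price-based on/off decision for a machine,
// modeled on the hourly-electricity-price use case described above.
package main

import "fmt"

// HourlyPrice is the electricity price for one hour of the day, in EUR/kWh.
type HourlyPrice struct {
	Hour  int
	Price float64
}

// shouldRun returns true when the price for the given hour is at or below
// the threshold the customer is willing to pay.
func shouldRun(prices []HourlyPrice, hour int, threshold float64) bool {
	for _, p := range prices {
		if p.Hour == hour {
			return p.Price <= threshold
		}
	}
	return false // no price data for this hour: keep the machine off
}

func main() {
	prices := []HourlyPrice{{Hour: 13, Price: 0.09}, {Hour: 19, Price: 0.21}}
	for _, h := range []int{13, 19} {
		fmt.Printf("hour %d: run machine = %v\n", h, shouldRun(prices, h, 0.15))
	}
}
```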
@michele_lacorte1 Awesome! Thanks for sharing the usecase. Huge potential and so many possible integrations!
@all Hi guys, congrats on the project! I really like the idea behind it and find it fitting for various contexts. I just have a slightly more technical question about the architecture behind the data integration.
How does Meddle handle real-time data ingestion from heterogeneous sources? Are you using a specific data pipeline architecture, such as event streaming (e.g., Kafka) or ETL/ELT processes? And how do you maintain data consistency and handle schema changes when sources update their data models, without breaking existing integrations?
@all @lollomarco
Thank you for your reply!
We decided to keep the architecture very simple in order to favor on-premises use of the software while complying with cybersecurity and compliance requirements.
We use Go as our base language and leverage its features to manage data ingestion events. We also have automated systems that adapt flexibly to schema changes and keep the pipeline running without manual intervention.
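To make that concrete, here is a minimal, hypothetical Go sketch of a channel-based ingestion pipeline that decodes payloads into a generic map, so the consumer keeps working when a source adds or renames fields. It illustrates the approach described above under those assumptions; it is not Meddle's actual code.

```go
// Hypothetical sketch: schema-tolerant ingestion of events over a channel.
package main

import (
	"encoding/json"
	"fmt"
)

// Event is a raw ingestion payload from any source.
type Event struct {
	Source string
	Body   []byte
}

// decode parses the body into a generic map so unknown or renamed fields
// are preserved instead of breaking the pipeline on schema changes.
func decode(e Event) (map[string]interface{}, error) {
	var record map[string]interface{}
	if err := json.Unmarshal(e.Body, &record); err != nil {
		return nil, err
	}
	return record, nil
}

func main() {
	events := make(chan Event, 8)
	done := make(chan struct{})

	// Consumer goroutine: decodes and processes events as they arrive.
	go func() {
		defer close(done)
		for e := range events {
			record, err := decode(e)
			if err != nil {
				fmt.Println("skipping malformed event from", e.Source, ":", err)
				continue
			}
			fmt.Printf("from %s: %v\n", e.Source, record)
		}
	}()

	// Producer: two payloads with different shapes pass through the same pipeline.
	events <- Event{Source: "plc-1", Body: []byte(`{"temp": 71.5}`)}
	events <- Event{Source: "plc-1", Body: []byte(`{"temperature_c": 71.5, "line": "A"}`)}
	close(events)
	<-done
}
```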
This sounds like it solves such a huge industrial data pain point—no more custom adapters or one-off pipelines 👀. Love that normalized data can be routed to multiple destinations at once, plus the natural language querying for real-time trend checks. Quick thought: would it be possible to test the protocol normalization with common industrial standards before fully committing?
@yuanrong_tang @all
Thank you for your feedback!
We have opened the waitlist and, at launch, we will offer a 7- to 14-day trial of the solution, so absolutely yes!