
edge.mq: the easiest way to land data in S3
POST JSON to an HTTP endpoint. Get Parquet in your S3.
Edge.mq is a managed data ingestion platform. It gives you a secure internet endpoint that accepts data from any source - your apps, devices, servers, or third-party services - and delivers it safely into your own Amazon S3 storage as query-ready files. Data arrives as compressed archives, raw Parquet, or fully typed and structured Parquet with automatic schema extraction - ready for your warehouse, lakehouse, or ML pipeline without any additional transformation step.

What is edge.mq?
Edge.mq is a managed data ingestion platform. It gives you a secure internet endpoint that accepts data from any source — your apps, devices, servers, or third-party services — and delivers it safely into your own Amazon S3 storage, ready for analytics, reporting, or machine learning.
There is no infrastructure to set up, no clusters to manage, and no specialised client software to install. If your application can make an HTTP request, it can send data to edge.mq.
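For example, sending an event needs nothing beyond a standard HTTP library. A minimal sketch in Python — note that the endpoint URL, header names, and API-key scheme below are illustrative placeholders, not edge.mq's documented API:

```python
import json
import urllib.request

def build_event_request(endpoint: str, api_key: str, event: dict) -> urllib.request.Request:
    """Build an HTTP POST carrying one JSON event.

    The endpoint path and Authorization scheme here are assumptions for
    illustration; check the edge.mq Quickstart for the real ones.
    """
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_event_request(
    "https://ingest.example.com/v1/events",  # placeholder endpoint
    "YOUR_API_KEY",                          # placeholder key
    {"type": "page_view", "path": "/pricing"},
)
# Sending is then a single call: urllib.request.urlopen(req)
```

Any language with an HTTP client — curl, JavaScript's fetch, Go's net/http — follows the same shape: a POST with a JSON body and an auth header.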
Example use cases
E-commerce event tracking
Capture every product view, add-to-cart, and purchase event from your storefront. Data lands in S3 as structured Parquet files that your analytics or recommendation team can query directly — no separate ETL pipeline required.
IoT and sensor data collection
Millions of devices sending small readings — temperature, GPS coordinates, machine status — can post directly to edge.mq. The platform consolidates these into efficient, query-ready files in S3, handling network interruptions and reconnection storms without data loss.
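The platform absorbs reconnection storms on the server side; on the device side, a small retry-with-backoff wrapper is still good practice for transient network errors. A generic sketch — `send_fn` stands in for whatever HTTP call your device makes, and nothing here is edge.mq-specific:

```python
import random
import time

def send_with_backoff(send_fn, payload, max_attempts=5, base_delay=0.5):
    """Call send_fn(payload), retrying transient failures with jittered
    exponential backoff. Raises the last error if all attempts fail."""
    for attempt in range(max_attempts):
        try:
            return send_fn(payload)
        except OSError:  # network hiccup: wait, then retry
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter, so a fleet of devices
            # reconnecting at once does not retry in lockstep.
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)
```

Pairing a wrapper like this with the platform's durable acknowledgements means a reading buffered on a device survives a dropped connection end to end.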
Financial market data
Capture high-frequency trade and quote events during market hours. edge.mq handles the burst of activity at market open and close, landing tick data safely in S3 for compliance archives, backtesting, or live dashboard hydration.
Webhook and third-party data ingestion
Receive webhooks from payment processors, CRMs, or partner APIs through a single reliable endpoint. Even if your downstream systems are temporarily offline, the data is safely buffered and delivered to S3 for later processing.
Product analytics and clickstream
Collect user interaction events from web and mobile apps. edge.mq batches and compresses these into columnar Parquet files partitioned by date, making them immediately usable by your BI tools or data warehouse.
ML and AI training data pipelines
Stream labelled events, feature logs, or inference results into S3 as typed Parquet files. Your data science team gets clean, schema-aware datasets without writing custom ingestion code.
How it works
Edge.mq operates in three simple steps:
Send — Your application posts data to a secure HTTPS endpoint. No special client libraries needed — any HTTP client works.
Store — edge.mq durably captures every record, compresses it, and delivers it to your S3 bucket as organised files with a clear delivery confirmation.
Query — Use your preferred analytics tools (Snowflake, Databricks, DuckDB, or anything that reads Parquet and S3) to query, transform, and act on your data.
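The Query step needs nothing edge.mq-specific: once Parquet files are in S3, any engine can read them. A sketch assuming a hypothetical date-partitioned layout — the bucket name and path scheme below are placeholders, not edge.mq's actual output layout:

```python
def parquet_glob(bucket: str, stream: str, day: str) -> str:
    """S3 glob for one day's files, assuming a date-partitioned layout."""
    return f"s3://{bucket}/{stream}/date={day}/*.parquet"

# With DuckDB, for example, this runs as: duckdb.sql(query).show()
query = (
    "SELECT event_type, count(*) AS n "
    f"FROM read_parquet('{parquet_glob('my-bucket', 'clickstream', '2024-06-01')}') "
    "GROUP BY event_type ORDER BY n DESC"
)
```

The same glob works in Snowflake, Databricks, or any other tool that reads Parquet directly from S3.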
What problems does it solve?
Data loss during traffic spikes
When your systems experience sudden bursts of activity — a flash sale, market open, or a wave of sensor readings — traditional pipelines can drop records or slow to a crawl. edge.mq absorbs bursts gracefully. Every record is written to durable local storage before being confirmed, so nothing is lost even under heavy load.
Complex and expensive streaming infrastructure
Tools like Kafka and Kinesis are powerful but come with significant operational overhead: clusters to provision, partitions to tune, and specialised teams to maintain them. edge.mq replaces that complexity with a single HTTP endpoint. You send data in, it lands in S3. There are no brokers, no partitions, and no cluster management.
Slow time to value
Setting up a traditional data pipeline can take weeks of engineering effort. With edge.mq, you can go from zero to data landing in S3 in minutes. Connect your storage, generate an API key, and start sending data.
Data residency and compliance
Many organisations need to control where their data is stored geographically. edge.mq lets you choose the regions where your endpoints run and which S3 buckets your data lands in, so you stay in control of data residency from the start.
Key benefits
Simple — Send data over HTTP, receive organised files in your S3 bucket. No infrastructure to operate.
Fast — Data is acknowledged in milliseconds and typically available in S3 within a minute.
Reliable — Every record is durably stored before being confirmed. Crash recovery is automatic. Nothing is lost.
Cost-effective — Pay for what you use. No idle clusters, no minimum commitments on the free tier.
Flexible — Choose your regions, your S3 buckets, and your downstream tools. edge.mq outputs industry-standard formats (compressed segments and Parquet files) that work with Snowflake, Databricks, DuckDB, ClickHouse, and more.
Secure — Data is encrypted in transit and at rest. Authentication, per-account isolation, and least-privilege access controls are built in.
Getting started
edge.mq offers a free Starter plan with 10 GiB of ingestion per month — no credit card required. Connect your S3 bucket, generate an API key, and send your first payload in minutes.
Visit edge.mq to create your account, or read the Quickstart guide to get up and running.