Your app should never wait on a network call to write data.
Embed a real local database — SQLite, DuckDB, Derby, H2, or HyperSQL — and get full ACID transactions at native speed with zero connectivity dependency.
SyncLite captures every committed transaction and replicates it automatically to your central store.
🚫 No custom CDC code. 🚫 No message brokers to operate. 🛡️ No data loss on network failure. This is how modern apps should handle data — and SyncLite makes it a one-day integration.
Sources produce compact binary logs → shipped to staging → Consolidator delivers in real time. Sub-second latency on local stages.
One platform, five problem domains. Pick the one you need today — the architecture handles the rest.
Embed SQLite or DuckDB in your desktop, mobile, or edge app. SyncLite replicates every write to the cloud automatically — your app keeps working offline, data syncs when connectivity returns.
Deploy hundreds of edge devices. Each runs a local embedded DB. SyncLite consolidates all of them into a single cloud database in real time — without you writing a line of replication code.
Use the SyncLiteStream API or Kafka Producer-compatible interface for high-throughput append-only event ingestion. Land events in any data warehouse or lake with exactly-once semantics.
SyncLite DBReader connects to PostgreSQL, MySQL, Oracle, SQL Server, and more. Replicate tables incrementally via watermarks, or capture changes at the binary log level for near-zero latency.
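The watermark-based incremental method is easy to picture: each cycle reads only rows whose incremental-key column has advanced past the last checkpoint. A minimal stand-in sketch using Python's sqlite3 (not SyncLite's internal code; the `sales` table and `updated_at` column are illustrative):

```python
import sqlite3

# Source table with an updated_at incremental key (stand-in for a PostgreSQL source)
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE sales(id INTEGER PRIMARY KEY, amount INT, updated_at INT)")
src.executemany("INSERT INTO sales VALUES(?, ?, ?)",
                [(1, 10, 100), (2, 20, 150), (3, 30, 200)])

def incremental_read(conn, watermark):
    """Fetch only rows changed since the last checkpoint, plus the new watermark."""
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM sales WHERE updated_at > ? ORDER BY updated_at",
        (watermark,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else watermark
    return rows, new_watermark

rows, wm = incremental_read(src, 120)   # first cycle: only rows past watermark 120
rows2, wm2 = incremental_read(src, wm)  # second cycle: nothing new yet
```

Persisting the watermark between cycles is what makes the job restartable; the log-based CDC path avoids even this scan by tailing the source's binary log.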
SyncLite QReader subscribes to any MQTT v3.1 broker — Mosquitto, EMQX, AWS IoT Core, Azure IoT Hub. Parse CSV or JSON payloads and land sensor data in your analytics DB in minutes.
Give AI agents a durable, queryable local memory store backed by SQLite. All state changes are automatically replicated to a central database for observability, replayability, and multi-agent coordination.
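As an illustration of the pattern (plain sqlite3 as a stand-in here; with SyncLite, the same local writes would also replicate to the central store), an agent's memory is just an ordinary table it reads and writes locally:

```python
import sqlite3, json, time

# Local agent memory; SyncLite would replicate these writes centrally as well.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE agent_state(ts INTEGER, agent_id TEXT, key TEXT, value TEXT)")

def remember(agent_id, key, value):
    mem.execute("INSERT INTO agent_state VALUES(?, ?, ?, ?)",
                (int(time.time() * 1000), agent_id, key, json.dumps(value)))
    mem.commit()

def recall(agent_id, key):
    # Latest write wins; rowid breaks same-millisecond ties
    row = mem.execute(
        "SELECT value FROM agent_state WHERE agent_id=? AND key=? "
        "ORDER BY ts DESC, rowid DESC LIMIT 1",
        (agent_id, key)).fetchone()
    return json.loads(row[0]) if row else None

remember("planner", "goal", {"task": "summarize report"})
remember("planner", "goal", {"task": "draft email"})
latest = recall("planner", "goal")
```

Because every state change is a committed transaction, the replicated copy doubles as an audit log for replay and multi-agent coordination.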
Add the jar, write standard JDBC or use a high-level API — SyncLite handles the rest.
```java
// Standard JDBC — SyncLite captures every transaction transparently
Class.forName("io.synclite.logger.SQLite");
SQLite.initialize(dbPath, conf); // reads synclite_logger.conf
// To switch to DuckDB/Derby/H2/HyperSQL, change driver class + JDBC URL only.
try (Connection c = DriverManager.getConnection("jdbc:synclite_sqlite:" + dbPath);
     Statement s = c.createStatement()) {
    s.execute("CREATE TABLE IF NOT EXISTS orders(id INT, item TEXT, qty INT)");
    s.execute("INSERT INTO orders VALUES(1, 'widget', 100)");

    // SELECT — local read from the embedded database
    ResultSet rs = s.executeQuery("SELECT id, item, qty FROM orders WHERE id = 1");
    while (rs.next()) { /* process row */ }

    // UPDATE and DELETE — also captured and replicated to destination
    s.execute("UPDATE orders SET qty = 200 WHERE id = 1");
    s.execute("DELETE FROM orders WHERE id = 1");
    // ↑ all DML logged to a binary file → shipped to stage → consolidated into destination
}
SQLite.closeAll();
```
```java
// SyncLiteStore — typed CRUD without raw SQL, schema evolution built-in
Class.forName("io.synclite.logger.SQLiteStore");
SQLiteStore.initialize(dbPath, conf);
try (SyncLiteStore store = SQLiteStore.open(dbPath)) {
    store.createTable("players", new LinkedHashMap<>(Map.of(
        "id", "INTEGER PRIMARY KEY",
        "name", "TEXT",
        "score", "INTEGER"
    )));
    store.insert("players", Map.of("id", 1, "name", "Alice", "score", 100));
    store.update("players", Map.of("score", 250), Map.of("name", "Alice"));
    store.delete("players", Map.of("id", 1));
    List<Map<String, Object>> rows = store.selectAll("players");
}
SQLiteStore.closeDevice(dbPath);
```
```java
// SyncLiteStream — fluent append-only event ingestion
Class.forName("io.synclite.logger.Streaming");
Streaming.initialize(dbPath, conf);
try (SyncLiteStream stream = SyncLiteStream.open(dbPath)) {
    stream.createTable("events", new LinkedHashMap<>(Map.of(
        "ts", "BIGINT",
        "event_type", "TEXT",
        "user_id", "TEXT"
    )));
    stream.insert("events", Map.of(
        "ts", System.currentTimeMillis(),
        "event_type", "SIGNUP",
        "user_id", "u1"
    ));

    // New columns added inline — schema evolves automatically
    stream.insertBatch("events", List.of(
        Map.of("ts", System.currentTimeMillis(), "event_type", "VIEW", "user_id", "u2", "source", "web"),
        Map.of("ts", System.currentTimeMillis(), "event_type", "PURCHASE", "user_id", "u3", "source", "app")
    ));
}
```
```properties
# SyncLite DBReader — job configuration file (not application code)
# Table/topic mappings are configured via the web UI at http://localhost:8080/synclite-dbreader
synclite-device-dir = /opt/synclite/devices
synclite-logger-configuration-file = /opt/synclite/synclite_logger.conf
src-type = POSTGRESQL
src-connection-string = jdbc:postgresql://pg.internal:5432/sales
src-user = reader
src-password = secret
src-connection-timeout-s = 30
src-dbreader-method = INCREMENTAL
src-dbreader-interval-s = 10
src-dbreader-batch-size = 100000
src-object-type = TABLE
src-default-unique-key-column-list = id
src-default-incremental-key-column-list = updated_at
src-infer-schema-changes = true
# DBReader handles batching, retries, checkpoints, and restarts.
```
```properties
# SyncLite QReader — job configuration file (not application code)
# Topic-to-table mappings are configured via the web UI at http://localhost:8080/synclite-qreader
synclite-device-dir = /opt/synclite/devices
synclite-logger-configuration-file = /opt/synclite/synclite_logger.conf
mqtt-broker-url = tcp://mqtt.example.com:1883
mqtt-qos-level = 1
mqtt-clean-session = true
mqtt-broker-connection-timeout-s = 10
mqtt-broker-connection-retry-interval-s = 2
src-message-format = CSV
src-message-field-delimiter = ,
qreader-synclite-device-type = SQLITE_APPENDER
qreader-map-devices-to-single-synclite-device = true
qreader-default-synclite-device-name = iot_device
qreader-default-synclite-table-name = iot_events
# Works with Mosquitto, EMQX, AWS IoT Core, and Azure IoT Hub.
```
```python
# Python via JayDeBeApi — standard SQL, same JDBC driver
import jaydebeapi, jpype

jar = "/path/to/synclite-logger-<version>.jar"
jpype.startJVM(jpype.getDefaultJVMPath(), f"-Djava.class.path={jar}", convertStrings=True)

conn = jaydebeapi.connect(
    "io.synclite.logger.SQLite",
    "jdbc:synclite_sqlite:/home/alice/synclite/db/myapp.db",
    {"config": "/home/alice/synclite/synclite_logger.conf"},
    jar,
)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS events(id INT, payload TEXT)")
cur.execute("INSERT INTO events VALUES(1, 'hello from Python')")

# SELECT — local read from the embedded database
cur.execute("SELECT id, payload FROM events WHERE id = 1")
for row in cur.fetchall():
    print(row)
conn.commit()
conn.close()

# SyncLiteStore via JPype
from io.synclite.logger import SQLiteStore, SyncLiteStore
from java.nio.file import Paths
from java.util import LinkedHashMap

db = Paths.get("/home/alice/synclite/db/mystore.db")
SQLiteStore.initialize(db, Paths.get("synclite_logger.conf"))
with SyncLiteStore.open(db) as store:
    cols = LinkedHashMap()
    cols.put("id", "INTEGER PRIMARY KEY")
    cols.put("name", "TEXT")
    store.createTable("users", cols)
    store.insert("users", {"id": 1, "name": "Alice"})
SQLiteStore.closeDevice(db)
```
```python
# Any language — plain HTTP/JSON via SyncLite DB server
# Works with Python, Go, Rust, C#, C++, Ruby, Node.js…
import requests

BASE = "http://localhost:5555/synclite"

# 1. Initialize
requests.post(BASE, json={
    "db-type": "SQLITE",
    "db-path": "/tmp/myapp.db",
    "synclite-logger-config": "/tmp/synclite_logger.conf",
    "sql": "initialize",
})

# 2. DDL
requests.post(BASE, json={
    "db-path": "/tmp/myapp.db",
    "sql": "CREATE TABLE IF NOT EXISTS orders(id INT, item TEXT)",
})

# 3. Batched insert
requests.post(BASE, json={
    "db-path": "/tmp/myapp.db",
    "sql": "INSERT INTO orders VALUES(?, ?)",
    "arguments": [[1, "widget"], [2, "gadget"]],
})

# 4. SELECT — local read from the embedded database
resp = requests.post(BASE, json={"db-path": "/tmp/myapp.db", "sql": "SELECT id, item FROM orders"})
print(resp.json())
```
```java
// Jedis-compatible API for Redis-style commands on SyncLiteKV
try (SyncRedis redis = new SyncRedis("jdbc:synclite_sqlite:" + dbPath, conf)) {
    redis.set("session:u1", "active");
    redis.hset("profile:u1", "tier", "gold");
    redis.incrBy("counter:events", 1);
    String state = redis.get("session:u1");
}
// Commands are persisted locally and replicated through the standard SyncLite pipeline.
```
```java
// Kafka Producer-compatible API backed by SyncLiteStream
Properties props = new Properties();
props.put("bootstrap.servers", "synclite://local");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
try (Producer<String, String> p = new KafkaProducer<>(props)) {
    p.send(new ProducerRecord<>("events", "u1", "signup"));
    p.send(new ProducerRecord<>("events", "u2", "purchase"));
}
// Same producer workflow, with SyncLite handling local durability and downstream replication.
```
SyncLite is designed to disappear into your stack — minimal config, maximum reliability.
Transactional log capture ensures every committed write is delivered exactly once to the destination — no duplicates, no gaps.
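Exactly-once delivery generally rests on idempotent apply: the consolidator records the last applied transaction ID per device in the same transaction as the data writes, so a log entry replayed after a crash or retry is a no-op. A generic sketch of that technique (not SyncLite's internal code) in Python:

```python
import sqlite3

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE orders(id INT, item TEXT)")
dest.execute("CREATE TABLE checkpoint(device TEXT PRIMARY KEY, last_txn INT)")

def apply_txn(device, txn_id, statements):
    """Apply a logged transaction at most once per (device, txn_id)."""
    row = dest.execute("SELECT last_txn FROM checkpoint WHERE device=?", (device,)).fetchone()
    if row and txn_id <= row[0]:
        return False  # already applied — replayed delivery is skipped
    for sql, args in statements:
        dest.execute(sql, args)
    # Checkpoint advances atomically with the data writes
    dest.execute("INSERT INTO checkpoint VALUES(?, ?) "
                 "ON CONFLICT(device) DO UPDATE SET last_txn=excluded.last_txn",
                 (device, txn_id))
    dest.commit()
    return True

txn = [("INSERT INTO orders VALUES(?, ?)", (1, "widget"))]
apply_txn("edge-1", 1, txn)
apply_txn("edge-1", 1, txn)  # duplicate delivery: skipped
count = dest.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Committing the checkpoint and the data in one transaction is what turns at-least-once shipping into exactly-once application.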
Edge devices work fully offline. Log files accumulate locally and sync automatically when connectivity is restored.
Thousands of edge devices can consolidate into a single destination, and a single source can fan out to multiple destinations simultaneously.

Add a column on the edge and it appears in the destination automatically. No manual migration scripts.
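The general mechanism behind automatic schema evolution is simple: when an incoming row carries a column the destination table lacks, issue an ALTER TABLE before applying the row. A minimal sketch of that technique in Python (illustrative names, not SyncLite's code; new columns default to TEXT for brevity):

```python
import sqlite3

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE events(ts INTEGER, event_type TEXT)")

def evolve_and_insert(table, row):
    """Add any missing columns, then insert the row."""
    existing = {r[1] for r in dest.execute(f"PRAGMA table_info({table})")}
    for col in row:
        if col not in existing:
            dest.execute(f"ALTER TABLE {table} ADD COLUMN {col} TEXT")
    cols = ", ".join(row)
    marks = ", ".join("?" for _ in row)
    dest.execute(f"INSERT INTO {table}({cols}) VALUES({marks})", list(row.values()))
    dest.commit()

evolve_and_insert("events", {"ts": 1, "event_type": "VIEW"})
# A new 'source' column appears on the edge → destination evolves automatically
evolve_and_insert("events", {"ts": 2, "event_type": "PURCHASE", "source": "web"})
cols_after = [r[1] for r in dest.execute("PRAGMA table_info(events)")]
```

Older rows simply read NULL for the new column, which is why no backfilling migration script is needed.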
Encrypt log files in transit with a public/private key pair. The destination only decrypts — the edge never holds the private key.
Per-device replication lag, throughput metrics, and error tracking — all in the Consolidator web UI, updated in real time.
Push commands from the cloud back to edge devices — trigger purges, reload config, run schema migrations remotely.
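The cloud-to-edge command pattern can be pictured as a durable command channel the edge polls and acknowledges. This is a generic sketch of that pattern, not SyncLite's actual command API; the table and function names are hypothetical:

```python
import sqlite3

# Stand-in for a command channel: cloud writes commands, edge polls and acknowledges.
chan = sqlite3.connect(":memory:")
chan.execute("CREATE TABLE commands(id INTEGER PRIMARY KEY, device TEXT, cmd TEXT, done INT DEFAULT 0)")

def push_command(device, cmd):          # cloud side
    chan.execute("INSERT INTO commands(device, cmd) VALUES(?, ?)", (device, cmd))
    chan.commit()

def poll_and_run(device, handlers):     # edge side
    ran = []
    for cid, cmd in chan.execute(
            "SELECT id, cmd FROM commands WHERE device=? AND done=0 ORDER BY id",
            (device,)).fetchall():
        handlers[cmd]()                 # e.g. purge local logs, reload config
        ran.append(cid)
    chan.executemany("UPDATE commands SET done=1 WHERE id=?", [(c,) for c in ran])
    chan.commit()
    return ran

push_command("edge-1", "PURGE")
ran = poll_and_run("edge-1", {"PURGE": lambda: None})
```

Because commands are durable rows rather than live RPCs, a device that is offline when a command is issued still picks it up on its next poll.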
Entirely open-source (Apache 2.0). Works with your existing stack. Swap staging or destination without touching application code.
SyncLite Consolidator delivers to wherever your data needs to live.
Relational
Data Lakes & Analytics
NoSQL
Use only what you need. Every component is independently deployable.
| Component | What It Does | Language |
|---|---|---|
| SyncLite Logger | Embeddable JDBC driver. Wraps SQLite, DuckDB, Derby, H2, HyperSQL. Captures every SQL transaction into binary log files and ships them to staging storage. | Java · Python |
| SyncLite DB | Standalone HTTP/JSON database server. Language-agnostic access to all SyncLite device types over plain HTTP. Full transaction support, pagination, HMAC auth. | Any (HTTP) |
| SyncLite Consolidator | Always-on central sink. Reads log files from staging, replicates transactionally to one or more destinations. Web UI for job config, live metrics, and SQL analytics. | Java WAR |
| SyncLite DBReader | Database ETL and replication. Reads from PostgreSQL, MySQL, Oracle, SQL Server, and more. Supports full load, watermark-based incremental, and log-based CDC. | Java WAR |
| SyncLite QReader | IoT MQTT connector. Subscribes to any MQTT v3.1 broker, parses CSV/JSON payloads, feeds the SyncLite pipeline. Works with Mosquitto, EMQX, AWS IoT Core, Azure IoT Hub. | Java WAR |
| SyncLite Client | Interactive CLI for SyncLite devices. Connect directly (embedded) or via HTTP to a SyncLite DB server. Execute SQL, inspect data, test pipelines. | CLI |
| SyncLite Job Monitor | Unified operations dashboard. Start, stop, schedule, and monitor all Consolidator, DBReader, and QReader jobs from a single web UI. Supports cron scheduling and alerting. | Java WAR |
| SyncLite Validator | End-to-end integration testing. Generates synthetic workloads, runs them through the full pipeline, and does a row-by-row comparison of source and destination data. | Java WAR |
Questions, feedback, or just want to say hi? Here's where to find us.