Open Source · Apache 2.0
The Right Architecture for Data-Intensive Apps

The Sync Layer
Your App Has Been Missing

Your app should never wait on a network call to write data.
Embed a real local database — SQLite, DuckDB, Derby, H2, or HyperSQL — and get full ACID transactions at native speed with zero connectivity dependency. SyncLite captures every committed transaction and replicates it automatically to your central store.

🚫 No custom CDC code. 🚫 No message brokers to operate. 🛡️ No data loss on network failure. This is how modern apps should handle data — and SyncLite makes it a one-day integration.

Native Database Speed
Reads and writes hit an embedded engine directly — no network roundtrip, no latency penalty. Your app is always fast.
🔒
Zero Data Loss, Offline-Ready
Every committed transaction is logged locally. SyncLite delivers it to the destination when connectivity allows — exactly once, automatically.
🧹
Cut Your Data Stack in Half
Batching, retries, schema evolution, exactly-once delivery — all handled. You write SQL. SyncLite replaces 4–5 point solutions with one platform.
Read the Docs → · View on GitHub
Local-First Apps · Edge-to-Cloud Sync · Real-Time Pipelines · Database ETL & Replication · IoT & MQTT Ingestion · Agent Memory

How It Works

Your App
SyncLite Logger
SQLite · DuckDB · Derby · H2 · HyperSQL
──▶
Binary Log Files
Staging Storage
Local · SFTP · S3 · MinIO · Kafka
──▶
Always-On Sink
SyncLite Consolidator
──▶
Your Destination
PostgreSQL · MySQL · MS SQL Server
MongoDB · Apache Iceberg · ClickHouse…

Sources produce compact binary logs → shipped to staging → Consolidator delivers in real time. Sub-second latency on local stages.
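The three stages above can be sketched as a toy pipeline in a few lines of Python. This is a file-based stand-in using JSON lines and SQLite to show the shape of the flow, not SyncLite's actual binary log format or shipping protocol:

```python
import json, os, shutil, sqlite3, tempfile

root = tempfile.mkdtemp()
local, stage = os.path.join(root, "local"), os.path.join(root, "stage")
os.makedirs(local); os.makedirs(stage)

# 1. App side: each committed write is appended to a local log file.
with open(os.path.join(local, "log.000001.jsonl"), "w") as log:
    for row in [(1, "widget", 100), (2, "gadget", 50)]:
        log.write(json.dumps({"op": "INSERT", "row": row}) + "\n")

# 2. Shipper: closed log files move to staging storage (here: a directory;
#    in SyncLite this could be SFTP, S3, MinIO, or Kafka).
for name in os.listdir(local):
    shutil.move(os.path.join(local, name), os.path.join(stage, name))

# 3. Consolidator: read staged logs in order and apply them to the destination.
dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE orders(id INT, item TEXT, qty INT)")
for name in sorted(os.listdir(stage)):
    with open(os.path.join(stage, name)) as log:
        for line in log:
            entry = json.loads(line)
            if entry["op"] == "INSERT":
                dest.execute("INSERT INTO orders VALUES(?, ?, ?)", entry["row"])
dest.commit()

n = dest.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
shutil.rmtree(root)
```

Because the log files are ordinary files, the shipper and consolidator can run at different times and on different machines, which is what makes the pipeline offline-tolerant.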

Built for Every Data Sync Problem

One platform, six problem domains. Pick the one you need today — the architecture handles the rest.

📱

Local-First & Offline Apps

Embed SQLite or DuckDB in your desktop, mobile, or edge app. SyncLite replicates every write to the cloud automatically — your app keeps working offline, data syncs when connectivity returns.

☁️

Edge-to-Cloud Sync

Deploy hundreds of edge devices. Each runs a local embedded DB. SyncLite consolidates all of them into a single cloud database in real time — without you writing a line of replication code.

⚡

Real-Time Streaming Pipelines

Use the SyncLiteStream API or Kafka Producer-compatible interface for high-throughput append-only event ingestion. Land events in any data warehouse or lake with exactly-once semantics.

🔄

Database ETL & Replication

SyncLite DBReader connects to PostgreSQL, MySQL, Oracle, SQL Server, and more. Replicate tables incrementally via watermarks, or capture changes at the binary log level for near-zero latency.
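Watermark-based incremental replication reduces to remembering the highest incremental-key value applied so far and querying only past it. A minimal sketch of that pattern against plain SQLite (illustrative only; DBReader's internals differ):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders(id INT, item TEXT, updated_at INT)")
src.executemany("INSERT INTO orders VALUES(?, ?, ?)",
                [(1, "widget", 100), (2, "gadget", 200), (3, "gizmo", 300)])

def read_increment(conn, watermark):
    """Fetch only rows changed since the last watermark, then advance it."""
    rows = conn.execute(
        "SELECT id, item, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (watermark,)).fetchall()
    return rows, (rows[-1][2] if rows else watermark)

rows, wm = read_increment(src, 0)    # first pass: full load, watermark advances
src.execute("INSERT INTO orders VALUES(4, 'doohickey', 400)")
delta, wm = read_increment(src, wm)  # second pass: only the new row
```

Each polling interval repeats the second call, so steady-state work is proportional to the change volume, not the table size.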

📡

IoT & MQTT Ingestion

SyncLite QReader subscribes to any MQTT v3.1 broker — Mosquitto, EMQX, AWS IoT Core, Azure IoT Hub. Parse CSV or JSON payloads and land sensor data in your analytics DB in minutes.

🤖

Agent Memory & GenAI

Give AI agents a durable, queryable local memory store backed by SQLite. All state changes are automatically replicated to a central database for observability, replayability, and multi-agent coordination.
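The pattern here is an append-only memory log the agent writes to locally and can query or replay at any time. A minimal sketch using plain sqlite3 as a stand-in (with SyncLite, you would open the same database through the SyncLite logger driver so each commit also replicates to the central store):

```python
import json, sqlite3, time

mem = sqlite3.connect(":memory:")
mem.execute("""CREATE TABLE agent_memory(
    ts REAL, agent_id TEXT, kind TEXT, content TEXT)""")

def remember(agent_id, kind, content):
    """Append one state change; each commit is one replicable transaction."""
    mem.execute("INSERT INTO agent_memory VALUES(?, ?, ?, ?)",
                (time.time(), agent_id, kind, json.dumps(content)))
    mem.commit()

remember("planner", "observation", {"task": "book flight", "step": 1})
remember("planner", "decision", {"action": "search", "query": "SFO-JFK"})

# Replay the agent's full decision trail in insertion order.
trail = mem.execute(
    "SELECT kind, content FROM agent_memory WHERE agent_id = ? ORDER BY rowid",
    ("planner",)).fetchall()
```

Replaying the trail on the central copy gives the observability and multi-agent coordination described above without any extra instrumentation in the agent itself.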

Start Syncing in Minutes

Add the jar, write standard JDBC or use a high-level API — SyncLite handles the rest.

// Standard JDBC — SyncLite captures every transaction transparently
Class.forName("io.synclite.logger.SQLite");
SQLite.initialize(dbPath, conf);   // reads synclite_logger.conf
// To switch to DuckDB/Derby/H2/HyperSQL, change driver class + JDBC URL only.

try (Connection c = DriverManager.getConnection("jdbc:synclite_sqlite:" + dbPath);
     Statement  s = c.createStatement()) {

    s.execute("CREATE TABLE IF NOT EXISTS orders(id INT, item TEXT, qty INT)");
    s.execute("INSERT INTO orders VALUES(1, 'widget', 100)");

    // SELECT — local read from the embedded database
    try (ResultSet rs = s.executeQuery("SELECT id, item, qty FROM orders WHERE id = 1")) {
        while (rs.next()) { /* process row */ }
    }

    // UPDATE and DELETE — also captured and replicated to destination
    s.execute("UPDATE orders SET qty = 200 WHERE id = 1");
    s.execute("DELETE FROM orders WHERE id = 1");
    // ↑ all DML logged to a binary file → shipped to stage → consolidated into destination
}
SQLite.closeAll();
// SyncLiteStore — typed CRUD without raw SQL, schema evolution built-in
Class.forName("io.synclite.logger.SQLiteStore");
SQLiteStore.initialize(dbPath, conf);

try (SyncLiteStore store = SQLiteStore.open(dbPath)) {

    // Explicit puts keep column order deterministic (Map.of has no defined order)
    LinkedHashMap<String, String> cols = new LinkedHashMap<>();
    cols.put("id", "INTEGER PRIMARY KEY");
    cols.put("name", "TEXT");
    cols.put("score", "INTEGER");
    store.createTable("players", cols);

    store.insert("players", Map.of("id", 1, "name", "Alice", "score", 100));
    store.update("players", Map.of("score", 250), Map.of("name", "Alice"));
    store.delete("players", Map.of("id", 1));

    List<Map<String,Object>> rows = store.selectAll("players");
}
SQLiteStore.closeDevice(dbPath);
// SyncLiteStream — fluent append-only event ingestion
Class.forName("io.synclite.logger.Streaming");
Streaming.initialize(dbPath, conf);

try (SyncLiteStream stream = SyncLiteStream.open(dbPath)) {

    // Explicit puts keep column order deterministic (Map.of has no defined order)
    LinkedHashMap<String, String> cols = new LinkedHashMap<>();
    cols.put("ts", "BIGINT");
    cols.put("event_type", "TEXT");
    cols.put("user_id", "TEXT");
    stream.createTable("events", cols);

    stream.insert("events", Map.of(
        "ts", System.currentTimeMillis(), "event_type", "SIGNUP", "user_id", "u1"
    ));

    // New columns added inline — schema evolves automatically
    stream.insertBatch("events", List.of(
        Map.of("ts", System.currentTimeMillis(), "event_type", "VIEW",     "user_id", "u2", "source", "web"),
        Map.of("ts", System.currentTimeMillis(), "event_type", "PURCHASE", "user_id", "u3", "source", "app")
    ));
}
# SyncLite DBReader — job configuration file (not application code)
# Table/topic mappings are configured via the web UI at http://localhost:8080/synclite-dbreader

synclite-device-dir = /opt/synclite/devices
synclite-logger-configuration-file = /opt/synclite/synclite_logger.conf

src-type = POSTGRESQL
src-connection-string = jdbc:postgresql://pg.internal:5432/sales
src-user = reader
src-password = secret
src-connection-timeout-s = 30

src-dbreader-method = INCREMENTAL
src-dbreader-interval-s = 10
src-dbreader-batch-size = 100000

src-object-type = TABLE
src-default-unique-key-column-list = id
src-default-incremental-key-column-list = updated_at
src-infer-schema-changes = true

# DBReader handles batching, retries, checkpoints, and restarts.
# SyncLite QReader — job configuration file (not application code)
# Topic-to-table mappings are configured via the web UI at http://localhost:8080/synclite-qreader

synclite-device-dir = /opt/synclite/devices
synclite-logger-configuration-file = /opt/synclite/synclite_logger.conf

mqtt-broker-url = tcp://mqtt.example.com:1883
mqtt-qos-level = 1
mqtt-clean-session = true
mqtt-broker-connection-timeout-s = 10
mqtt-broker-connection-retry-interval-s = 2

src-message-format = CSV
src-message-field-delimiter = ,

qreader-synclite-device-type = SQLITE_APPENDER
qreader-map-devices-to-single-synclite-device = true
qreader-default-synclite-device-name = iot_device
qreader-default-synclite-table-name = iot_events

# Works with Mosquitto, EMQX, AWS IoT Core, and Azure IoT Hub.
# Python via JayDeBeApi — standard SQL, same JDBC driver
import jaydebeapi, jpype

jar = "/path/to/synclite-logger-<version>.jar"
jpype.startJVM(jpype.getDefaultJVMPath(), f"-Djava.class.path={jar}", convertStrings=True)

conn = jaydebeapi.connect(
    "io.synclite.logger.SQLite",
    "jdbc:synclite_sqlite:/home/alice/synclite/db/myapp.db",
    {"config": "/home/alice/synclite/synclite_logger.conf"},
    jar
)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS events(id INT, payload TEXT)")
cur.execute("INSERT INTO events VALUES(1, 'hello from Python')")

# SELECT — local read from the embedded database
cur.execute("SELECT id, payload FROM events WHERE id = 1")
for row in cur.fetchall(): print(row)

conn.commit()
conn.close()


# SyncLiteStore via JPype — start the JVM with the SyncLite jar on the classpath
import jpype
import jpype.imports
jpype.startJVM(classpath=["/path/to/synclite-logger-<version>.jar"])

from io.synclite.logger import SQLiteStore, SyncLiteStore
from java.nio.file import Paths
from java.util import LinkedHashMap

db = Paths.get("/home/alice/synclite/db/mystore.db")
SQLiteStore.initialize(db, Paths.get("synclite_logger.conf"))
with SyncLiteStore.open(db) as store:
    cols = LinkedHashMap()
    cols.put("id", "INTEGER PRIMARY KEY"); cols.put("name", "TEXT")
    store.createTable("users", cols)
    store.insert("users", {"id": 1, "name": "Alice"})
SQLiteStore.closeDevice(db)
# Any language — plain HTTP/JSON via SyncLite DB server
# Works with Python, Go, Rust, C#, C++, Ruby, Node.js…
import requests

BASE = "http://localhost:5555/synclite"

# 1. Initialize
requests.post(BASE, json={
    "db-type": "SQLITE", "db-path": "/tmp/myapp.db",
    "synclite-logger-config": "/tmp/synclite_logger.conf", "sql": "initialize"
})

# 2. DDL
requests.post(BASE, json={"db-path": "/tmp/myapp.db",
    "sql": "CREATE TABLE IF NOT EXISTS orders(id INT, item TEXT)"})

# 3. Batched insert
requests.post(BASE, json={"db-path": "/tmp/myapp.db",
    "sql": "INSERT INTO orders VALUES(?, ?)",
    "arguments": [[1, "widget"], [2, "gadget"]]})

# 4. SELECT — local read from the embedded database
resp = requests.post(BASE, json={"db-path": "/tmp/myapp.db",
    "sql": "SELECT id, item FROM orders"})
print(resp.json())
// Jedis-compatible API for Redis-style commands on SyncLiteKV
try (SyncRedis redis = new SyncRedis("jdbc:synclite_sqlite:" + dbPath, conf)) {

  redis.set("session:u1", "active");
  redis.hset("profile:u1", "tier", "gold");
  redis.incrBy("counter:events", 1);

  String state = redis.get("session:u1");
}
// Commands are persisted locally and replicated through the standard SyncLite pipeline.
// Kafka Producer-compatible API backed by SyncLiteStream
Properties props = new Properties();
props.put("bootstrap.servers", "synclite://local");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (Producer<String, String> p = new KafkaProducer<>(props)) {
  p.send(new ProducerRecord<>("events", "u1", "signup"));
  p.send(new ProducerRecord<>("events", "u2", "purchase"));
}
// Same producer workflow, with SyncLite handling local durability and downstream replication.

Everything You Need. Nothing You Don't.

SyncLite is designed to disappear into your stack — minimal config, maximum reliability.

🔁

Exactly-Once Semantics

Transactional log capture ensures every committed write is delivered exactly once to the destination — no duplicates, no gaps.
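A common way to turn at-least-once delivery into exactly-once apply is to commit the data together with a per-device checkpoint in a single destination transaction, and skip anything at or below the checkpoint. The sketch below illustrates that idea only; it is not SyncLite's internal protocol:

```python
import sqlite3

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE orders(id INT, item TEXT)")
dest.execute("CREATE TABLE applied(device TEXT PRIMARY KEY, last_txn INT)")

def apply_txn(device, txn_id, rows):
    """Apply a replicated transaction at most once per (device, txn_id)."""
    cur = dest.execute("SELECT last_txn FROM applied WHERE device = ?",
                       (device,)).fetchone()
    if cur and txn_id <= cur[0]:
        return False                      # duplicate delivery: skip
    with dest:                            # data + checkpoint commit atomically
        dest.executemany("INSERT INTO orders VALUES(?, ?)", rows)
        dest.execute("INSERT INTO applied VALUES(?, ?) "
                     "ON CONFLICT(device) DO UPDATE SET last_txn = excluded.last_txn",
                     (device, txn_id))
    return True

applied_once = apply_txn("edge-1", 1, [(1, "widget")])
applied_twice = apply_txn("edge-1", 1, [(1, "widget")])  # redelivered: ignored
```

Because the checkpoint rides in the same transaction as the data, a crash between the two can never leave a row applied twice or a checkpoint without its rows.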

📶

Offline Resilient

Edge devices work fully offline. Log files accumulate locally and sync automatically when connectivity is restored.

🔀

Many-to-Many

Thousands of edge devices consolidating into a single destination. One source fanning out to multiple destinations simultaneously.

🧬

Schema Evolution

Add a column on the edge and it appears in the destination automatically. No manual migration scripts.
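Conceptually, the consolidator compares each incoming record's fields against the destination table and issues an ALTER TABLE for anything new before inserting. An illustrative sketch against plain SQLite (new columns typed TEXT for simplicity; SyncLite's actual type mapping differs):

```python
import sqlite3

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE events(ts INT, event_type TEXT)")

def insert_with_evolution(table, record):
    """Add any columns the destination hasn't seen yet, then insert."""
    existing = {row[1] for row in dest.execute(f"PRAGMA table_info({table})")}
    for col in record:
        if col not in existing:
            dest.execute(f"ALTER TABLE {table} ADD COLUMN {col} TEXT")
    names = ", ".join(record)
    marks = ", ".join("?" for _ in record)
    dest.execute(f"INSERT INTO {table}({names}) VALUES({marks})",
                 list(record.values()))
    dest.commit()

insert_with_evolution("events", {"ts": 1, "event_type": "VIEW"})
# The edge added a "source" column; the destination grows to match.
insert_with_evolution("events", {"ts": 2, "event_type": "VIEW", "source": "web"})
```

Rows written before the column existed simply carry NULL in the new column, so old and new records coexist without a migration script.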

🔐

Log Encryption

Encrypt log files in transit with a public/private key pair. The destination only decrypts — the edge never holds the private key.

📊

Live Dashboard

Per-device replication lag, throughput metrics, and error tracking — all in the Consolidator web UI, updated in real time.

🔧

Bi-Directional Commands

Push commands from the cloud back to edge devices — trigger purges, reload config, run schema migrations remotely.

🚫

No Vendor Lock-In

Entirely open-source (Apache 2.0). Works with your existing stack. Swap staging or destination without touching application code.

Connect to Any System

SyncLite Consolidator delivers to wherever your data needs to live.

Relational

PostgreSQL · MySQL · Microsoft SQL Server · SQLite · DuckDB

Data Lakes & Analytics

Apache Iceberg · ClickHouse

NoSQL

MongoDB

A Complete Platform, Modular by Design

Use only what you need. Every component is independently deployable.

Component · What It Does · Language
SyncLite Logger Embeddable JDBC driver. Wraps SQLite, DuckDB, Derby, H2, HyperSQL. Captures every SQL transaction into binary log files and ships them to staging storage. Java · Python
SyncLite DB Standalone HTTP/JSON database server. Language-agnostic access to all SyncLite device types over plain HTTP. Full transaction support, pagination, HMAC auth. Any (HTTP)
SyncLite Consolidator Always-on central sink. Reads log files from staging, replicates transactionally to one or more destinations. Web UI for job config, live metrics, and SQL analytics. Java WAR
SyncLite DBReader Database ETL and replication. Reads from PostgreSQL, MySQL, Oracle, SQL Server, and more. Supports full load, watermark-based incremental, and log-based CDC. Java WAR
SyncLite QReader IoT MQTT connector. Subscribes to any MQTT v3.1 broker, parses CSV/JSON payloads, feeds the SyncLite pipeline. Works with Mosquitto, EMQX, AWS IoT Core, Azure IoT Hub. Java WAR
SyncLite Client Interactive CLI for SyncLite devices. Connect directly (embedded) or via HTTP to a SyncLite DB server. Execute SQL, inspect data, test pipelines. CLI
SyncLite Job Monitor Unified operations dashboard. Start, stop, schedule, and monitor all Consolidator, DBReader, and QReader jobs from a single web UI. Supports cron scheduling and alerting. Java WAR
SyncLite Validator End-to-end integration testing. Generates synthetic workloads, runs them through the full pipeline, and does a row-by-row comparison of source and destination data. Java WAR

Contact

Questions, feedback, or just want to say hi? Here's where to find us.

✉️
Support Email
Need help with setup, configuration, or production use?
support@synclite.io
🐙
GitHub Issues
Found a bug or have a feature request? Open an issue on GitHub.
Open an Issue →
📖
Documentation
Full platform docs, quickstarts, config reference, and API guides.
Read the Docs →
🤝
Contribute
PRs welcome. Read the contributing guide before opening a pull request.
Contributing Guide →
ℹ️
About SyncLite
Mission, patent info, author background, and the story behind the platform.
Read About Page →

Ship Your First Sync in Under 5 Minutes

One command deploys Tomcat, downloads OpenJDK 25, and starts all SyncLite apps. No cloud account. No signup. Fully local.

Read the Docs → · ★ Star on GitHub · View Patent Details
git clone --recurse-submodules git@github.com:syncliteio/SyncLite.git
cd SyncLite/bin && ./deploy.sh && ./start.sh