SyncLite
BUILD ANYTHING SYNC ANYWHERE
Open-Source | Real-Time | Low-Code | Secure | Scalable | Fault-Tolerant | Extensible | Exactly-Once Semantics | No Vendor Lock-In
SyncLite Database Replication/ETL/Migration Tool
The SyncLite Database Replication/ETL/Migration tool offers flexible, scalable, schema-aware, many-to-many database replication and migration. It enables effortless orchestration of incremental and log-based replication pipelines with precise control over your data, redefining the database replication/migration experience. The SyncLite DBReader application is configured to extract data from a source database into SyncLite telemetry devices, which are shared with the SyncLite Consolidator via configurable staging storage; the Consolidator then performs replication into a diverse array of destination databases, data warehouses, and data lakes.
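The reader-to-staging-to-consolidator handoff can be pictured with a tiny, self-contained sketch. This is an illustrative assumption only: the file naming, JSON batch format, and in-memory "destination" below are made up for the example and are not SyncLite's actual device/log format.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sketch of the decoupled flow: a "reader" writes change
# batches to a shared staging area; a "consolidator" later picks them up
# and applies them to a destination, independently of the reader.

def reader_extract(staging: Path, batch_id: int, rows: list) -> None:
    """Write one batch of extracted rows to the staging area."""
    (staging / f"batch_{batch_id:06d}.json").write_text(json.dumps(rows))

def consolidator_apply(staging: Path, destination: dict) -> int:
    """Apply all staged batches to the destination, in batch order."""
    applied = 0
    for batch in sorted(staging.glob("batch_*.json")):
        for row in json.loads(batch.read_text()):
            destination[row["id"]] = row   # idempotent upsert by key
            applied += 1
        batch.unlink()                     # consumed once, then removed
    return applied

staging = Path(tempfile.mkdtemp())
reader_extract(staging, 1, [{"id": 1, "name": "alice"}])
reader_extract(staging, 2, [{"id": 2, "name": "bob"}])
destination = {}
applied = consolidator_apply(staging, destination)
print(applied)  # 2
```

Because the reader and consolidator only meet at the staging area, either side can run, pause, or scale on its own schedule, which is the essence of the decoupled design described above.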
Key Features
Decoupled Architecture: SyncLite DBReader and SyncLite Consolidator function independently.
Flexible Deployment: Deploy them separately, closer to the source and destination databases respectively.
Adaptable to Impedance Mismatch: Unique decoupled architecture effortlessly adapts to any impedance mismatch for efficient and scalable data extraction and ingestion.
Many-to-Many Pipelines: Orchestrates many-to-many database replication/migration pipelines.
Secure Data Delivery: Multiple DBReaders securely deliver data from various tables to centralized staging areas, with an option to encrypt the data/log files in the staging storage.
Diverse Replication: Multiple Consolidators replicate tables/views into one or more databases/data warehouses/data lakes as per user preference.
Parallelism and Distribution Capabilities: DBReader and Consolidator offer inter-table and intra-table parallelism for data extraction and ingestion with both scale-up and scale-out options.
Schema Change Detection: DBReader identifies structural schema changes in source tables/views, including column additions/deletions and table/view drops, and replicates them.
Delete Synchronization Mode: DBReader and Consolidator provide a dedicated delete synchronization mode to synchronize deletes from the source database to the destination database, in addition to the soft-delete-based delete replication mechanism.
Incremental Key Columns: DBReader allows specifying incremental key columns (both at the default level and at the individual table/view level) for identifying changes in each source DB table/view.
Selective Replication: DBReader enables picking and choosing tables/columns for replication, specifying predicates, and defining parallelism strategy.
Sensitive Data Handling: DBReader provides an ability to mask data in sensitive columns before replication.
Data Type Mapping: Consolidator provides the ability to map data types from source to destination database.
Table/View and Column Filtering: Consolidator allows filtering and mapping of tables/columns.
Value Mapping: Consolidator can map column values to different values for flexible data replication.
Fine-Tuning Options: Consolidator offers fine-tuning options for optimal and fault tolerant writing on the destination database.
Zero Configuration Changes: Requires zero configuration changes on the source DB.
Statistical Reporting: Ability to maintain and publish data replication/consolidation statistics for transparent and insightful monitoring.
Broad Connector Support: Supports a wide range of connectors, including industry-leading databases, ensuring compatibility and seamless integration with diverse data ecosystems. Check out Data Integration page for supported systems.
Scheduled Execution: Both SyncLite DBReader and Consolidator provide the ability to configure daily job schedules, periodically starting/stopping the job during specified intervals for specified durations.
Job Management: Both SyncLite DBReader and Consolidator provide the ability to run and manage several jobs on the same host/VM, making the offering extremely flexible and maximizing the use of the underlying compute across all data pipelines.
Object Grouping: SyncLite DBReader provides a capability to group source DB tables/views and enforce replication of all grouped objects in the specified order. This becomes useful when objects have interdependencies.
Object Management: SyncLite DBReader provides fine-grained control over the replication of each source table/view, with the ability to individually enable/disable objects and to mark object schemas or object data for reloading on the next or every restart.
Additional Features: Many more features to enhance data replication and migration capabilities.
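To make the incremental-key idea from the feature list concrete, here is a generic sketch (not SyncLite code; the `orders` table and `updated_at` column are hypothetical) of how a reader can track a high-water mark on a monotonically increasing key column and fetch only rows beyond it on each pass:

```python
import sqlite3

# Illustrative incremental extraction: remember the highest value of the
# incremental key column seen so far, and on the next pass select only
# rows with a greater value.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, updated_at INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 100), (2, 200)])

def extract_incremental(conn, last_seen):
    """Fetch rows changed after the watermark; return rows + new watermark."""
    rows = conn.execute(
        "SELECT id, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_seen,),
    ).fetchall()
    new_watermark = rows[-1][1] if rows else last_seen
    return rows, new_watermark

rows, watermark = extract_incremental(conn, 0)       # first pass: all rows
conn.execute("INSERT INTO orders VALUES (3, 300)")   # a new change arrives
rows, watermark = extract_incremental(conn, watermark)
print(rows)  # only the new row
```

The same pattern works at the default level or per table/view: each object simply keeps its own watermark on its configured incremental key column.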
Going beyond the 1-to-1 database pipeline model, you can scale your data infrastructure by incorporating multiple DBReader applications, each extracting data from a distinct source database. Concurrently, multiple SyncLite Consolidators can be employed, each directing data to any of a wide range of databases, data warehouses, or data lakes, based on your preferences. This setup yields many-to-many data replication pipelines.
Within this fully decoupled architecture, you have the flexibility to orchestrate highly customizable, scalable, and efficient database migration and replication pipelines. This adaptable approach empowers you to meet the specific demands of your data integration projects while ensuring seamless, optimized operations.
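The many-to-many routing can be summarized in a few lines. The source names, destination names, and routing table below are hypothetical, chosen only to illustrate how each consolidator consumes the sources it is configured for:

```python
# Each reader publishes change batches from one source; each consolidator
# applies the sources it is configured to consume to its own destination.

sources = {
    "postgres_orders": [{"id": 1, "total": 10}],
    "db2_invoices":    [{"id": 7, "total": 99}],
}

# Hypothetical routing table: destination -> sources it consumes.
routes = {
    "mysql_dw":      ["postgres_orders"],
    "clickhouse_dl": ["postgres_orders", "db2_invoices"],
}

destinations = {dest: [] for dest in routes}
for dest, wanted in routes.items():
    for src in wanted:
        destinations[dest].extend(sources[src])

print(len(destinations["clickhouse_dl"]))  # 2: rows from both sources
```

Adding a source or a destination means adding a reader or a consolidator plus a routing entry; no existing pipeline needs to change.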
DEMO : PostgreSQL To MySQL
Explore the seamless database replication capabilities of SyncLite! Watch our demo video showcasing a real-time database replication pipeline from PostgreSQL to MySQL on TPC-H data.
DEMO : DB2 to ClickHouse
Explore the seamless database replication capabilities of SyncLite! Watch our demo video showcasing a real-time database replication pipeline from an on-prem DB2 to a cloud hosted ClickHouse database.
DEMO : MongoDB to FerretDB
Explore the seamless database replication capabilities of SyncLite! Watch our demo video showcasing a real-time database replication pipeline from a MongoDB database to a FerretDB database.