Now in Limited Beta • Free Credits for Early Adopters

Enterprise Data Synchronization Made Simple

Automate data transfer between databases, APIs, Google Sheets, and more with real-time monitoring and enterprise security.

Limited beta spots available · Free credits for early adopters · Production-ready platform

Dashboard Overview

Live
Total Jobs: 24 (↑ 12% this week)
Active Runs: 3 (2 scheduled)
Success Rate: 98.5% (↑ 2.3%)
Connections: 12 (8 databases)

Recent Job Runs

Salesforce → BigQuery · Success · 1,234 rows synced · 2 min ago
PostgreSQL → Snowflake · Running · 856 / 2,000 rows · in progress
Google Sheets → MySQL · Success · 342 rows synced · 5 min ago

Active Connections

PostgreSQL · Production DB
Salesforce · Sales CRM
BigQuery · Data Warehouse
Google Sheets · Marketing Data

See Kusi in Action

Production-ready platform with enterprise-grade monitoring and real-time observability

Dashboard Overview
Live
Total Jobs: 12 (+3 this week)
Success Rate: 94.2% (+2.1% vs last month)
Rows Processed: 1.2M (last 24 hours)
Active Jobs: 3 (running now)

Recent Job Runs

Job Name · Status · Rows · Duration · Started
Salesforce → PostgreSQL (Contacts sync) · Running · 1,247 / 2,500 · 00:02:34 · 2 min ago
Google Sheets → BigQuery (Marketing data) · Success · 8,432 · 00:01:12 · 15 min ago
Typeform → PostgreSQL (Survey responses) · Success · 234 · 00:00:45 · 1 hour ago
MySQL → Snowflake (Orders data) · Failed · 0 · 00:00:08 · 2 hours ago
API → PostgreSQL (Customer events) · Success · 15,678 · 00:03:21 · 3 hours ago

Real-time monitoring with WebSocket updates, job metrics, and system health

Real-Time Updates

Watch jobs execute row-by-row with WebSocket monitoring

Comprehensive Metrics

Track success rates, performance, and credit consumption

Data Observability

Monitor freshness, detect drift, and ensure data quality

Everything you need for data synchronization

Powerful features to automate your data pipelines with confidence

Multi-Source Integration

Connect to PostgreSQL, MySQL, Redshift, Snowflake, BigQuery, Google Sheets, Salesforce, Typeform, REST APIs, CSV, Excel, and more. SSH tunnel support for secure connections to private databases.

20+ data sources · SSH tunnel support · Custom adapters available

Real-Time Monitoring

Track job execution with live WebSocket updates showing row-by-row progress. Monitor performance metrics, credit consumption, and system health from a comprehensive dashboard with detailed analytics.

WebSocket updates · Performance metrics · Credit tracking

Enterprise Security

Production-grade security with JWT authentication, optional 2FA (TOTP), comprehensive audit logs, role-based access control (RBAC), security headers, and rate limiting.

JWT + 2FA · RBAC · Audit logs

Automated Scheduling

Schedule jobs with flexible cron expressions (every minute to yearly). Execute multiple tasks in parallel with automatic retry, dependency management, field mapping, and conflict detection.

Cron scheduling · Parallel execution · Auto-retry logic

How Data Flows Through Kusi

Understand the complete journey of your data from source to destination

1

Data Sources

PostgreSQL, MySQL, Redshift
Salesforce, BigQuery, Snowflake
Google Sheets, CSV, Excel
REST APIs, Typeform
2

Kusi Engine

Extract Data
Fetch from source with filters
Transform
Map fields, validate schema
Load
Write to destination with retry
Real-time Monitoring
3

Data Destinations

Data Warehouses
Cloud Databases
Spreadsheets & Files
APIs & Webhooks

Flexible Transfer Modes

Full Sync

Replace all data at destination with fresh copy from source. Perfect for snapshots and reports.

Incremental

Only sync new or modified records based on timestamp. Minimal credit usage for large datasets.

Upsert

Insert new records and update existing ones based on primary key. Best for maintaining live data.
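The three modes differ only in how the destination is updated. Here is a minimal sketch of their semantics over in-memory records, purely illustrative; the `pk` and `updated_at` field names are our own, and Kusi applies the equivalent logic at the database level:

```python
# Illustrative semantics of the three transfer modes over lists of dicts.

def full_sync(source, destination):
    """Replace everything at the destination with a fresh copy of the source."""
    return list(source)

def incremental(source, destination, last_run_at):
    """Append only records modified since the previous run."""
    new = [r for r in source if r["updated_at"] > last_run_at]
    return destination + new

def upsert(source, destination, pk="id"):
    """Insert new records; update existing ones matched by primary key."""
    merged = {r[pk]: r for r in destination}
    for r in source:
        merged[r[pk]] = r  # overwrite on key match, insert otherwise
    return list(merged.values())
```

In SQL terms, upsert corresponds to an `INSERT ... ON CONFLICT DO UPDATE` (PostgreSQL) or `MERGE` (Snowflake, BigQuery) at the destination.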

Enterprise-Grade Data Pipeline Features

Auto Retry
Automatic retries with exponential backoff
SSH Tunnels
Secure connections to private databases
Field Mapping
Custom transformations and data type conversion
Parallel Tasks
Multiple tasks run simultaneously for speed

Understanding Jobs & Tasks

Learn how Kusi organizes and executes your data synchronization workflows

What is a Job?

A Job is a collection of related data synchronization tasks that run together on a schedule. Think of it as a workflow container that orchestrates multiple data movements.

Scheduled Execution

Jobs run on cron schedules (e.g., every hour or daily at 2am) or can be triggered manually

Multiple Tasks

Each job contains one or more tasks that can run in parallel or sequentially

Run History

Every execution creates a Job Run record with metrics and logs

What is a Task?

A Task is a single data transfer operation from one source to one destination. Tasks define what data to move, how to transform it, and where it goes.

Task Components:

Data Source
Which connection to pull from (e.g., Salesforce Account table)
Data Destination
Where to write data (e.g., BigQuery “accounts” table)
Field Mappings
How source fields map to destination (e.g., “sf_id” → “salesforce_id”)

Task Settings:

Transfer Mode
Full sync, incremental, or upsert
Filters & Queries
SQL WHERE clauses or API filters to limit data
Execution Order
Run tasks in parallel or set dependencies
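The "sf_id" → "salesforce_id" example above can be pictured as a simple rename-and-convert step. The mapping and converter below are hypothetical, not Kusi's actual schema:

```python
# A minimal field-mapping step: rename source fields and coerce types.
# MAPPING and CONVERTERS are illustrative examples only.

MAPPING = {
    "sf_id": "salesforce_id",
    "Name": "account_name",
    "AnnualRevenue": "annual_revenue",
}
CONVERTERS = {"annual_revenue": float}

def apply_mapping(row):
    out = {}
    for src_field, dst_field in MAPPING.items():
        value = row.get(src_field)
        convert = CONVERTERS.get(dst_field)
        out[dst_field] = convert(value) if convert and value is not None else value
    return out
```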

Job Execution Lifecycle

1. Scheduled
Celery Beat triggers the job based on its cron schedule
2. Queued
A job run is created and its tasks are queued in Celery
3. Running
Tasks execute in parallel, fetching and writing data
4. Completed
All tasks finish, metrics are recorded, credits deducted
5. Reported
Alerts sent, WebSocket updates pushed, logs persisted

Example: Daily Sales Pipeline Sync

Job: “Sales Data Pipeline”
Schedule: Daily at 2:00 AM UTC
Task 1: Accounts
Source: Salesforce “Account”
Destination: BigQuery “crm.accounts”
Mode: Incremental (last 24h)
Task 2: Opportunities
Source: Salesforce “Opportunity”
Destination: BigQuery “crm.opportunities”
Mode: Incremental (last 24h)
Task 3: Contacts
Source: Salesforce “Contact”
Destination: BigQuery “crm.contacts”
Mode: Incremental (last 24h)
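Expressed as a configuration object, a job like this might look as follows. This shape is a sketch for illustration, not Kusi's actual config format:

```python
# Hypothetical representation of the "Sales Data Pipeline" job.
job = {
    "name": "Sales Data Pipeline",
    "schedule": "0 2 * * *",  # standard cron: daily at 2:00 AM UTC
    "tasks": [
        {"source": "salesforce.Account",     "destination": "bigquery.crm.accounts",      "mode": "incremental"},
        {"source": "salesforce.Opportunity", "destination": "bigquery.crm.opportunities", "mode": "incremental"},
        {"source": "salesforce.Contact",     "destination": "bigquery.crm.contacts",      "mode": "incremental"},
    ],
}
```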

Execution Results:

Total Tasks: 3
Execution Time: 187 seconds
Rows Synced: 12,453
Data Transferred: 8.2 MB

Credit Usage:

Rows (12,453 × 0.00001): 0.12
Time (187 s × 0.001): 0.19
Runs (3 × 0.1): 0.30
Data (8.2 MB ≈ 8,598,323 bytes × 0.00000001): 0.09
Total Credits: 0.70

Parallel Execution

Tasks within a job run simultaneously for maximum speed and efficiency

Smart Failure Handling

A failed task doesn't stop the other tasks, and failures are retried automatically with exponential backoff.

Real-Time Progress

WebSocket updates show live progress, row counts, and status for each task

Transparent Usage Tracking

Built-in credit system tracks resource consumption. See exactly what each job costs.

Credit Formula = (Rows × 0.00001) + (Seconds × 0.001) + (Bytes × 0.00000001)

Rows Processed

0.00001
per row

Each data row transferred between sources and destinations

100,000 rows = 1 credit

Compute Time

0.001
per second

Total job execution time measured in seconds

1,000 seconds = 1 credit (~16 min)

Data Transferred

0.00000001
per byte

Total data size transferred in bytes

~100 MB = 1 credit
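The formula can be checked with a few lines of code. The function name below is ours, not part of the Kusi API:

```python
# Credit cost per the published formula:
# (rows × 0.00001) + (seconds × 0.001) + (bytes × 0.00000001)

def estimate_credits(rows, seconds, bytes_transferred):
    return rows * 0.00001 + seconds * 0.001 + bytes_transferred * 0.00000001

# e.g. 50,000 rows in 120 s moving 5 MB:
# estimate_credits(50_000, 120, 5 * 1024 * 1024) ≈ 0.67 credits
```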

Example: Salesforce to BigQuery Sync

Job Details:

Rows processed: 50,000
Execution time: 120 seconds
Data transferred: 5 MB

Credit Calculation:

Rows (50,000 × 0.00001): 0.50
Time (120 × 0.001): 0.12
Data (5,242,880 × 0.00000001): 0.05
Total Credits: 0.67

Resource Tracking

Automatic tracking of rows processed, compute time, and data transferred per job.

Cost Visibility

See exactly what each job costs in credits before and after execution.

Usage Analytics

Built-in dashboard shows credit consumption trends and top resource consumers.

Powerful API Integration

Trigger jobs, monitor progress, and manage data pipelines programmatically with our REST API

Get Started with API Keys

Generate API keys from your dashboard to authenticate requests. Keys can be scoped to specific organizations and permissions.

cURL

curl -X POST http://localhost:8000/api/v1/jobs/run \
  -H "Authorization: Bearer your-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": "your-job-id",
    "trigger": "manual"
  }'

Python

import requests

api_key = "your-api-key-here"
base_url = "http://localhost:8000/api/v1"

# Trigger a job
response = requests.post(
    f"{base_url}/jobs/run",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"job_id": "your-job-id", "trigger": "manual"}
)

print(f"Job started: {response.json()}")

JavaScript

const apiKey = "your-api-key-here";
const baseUrl = "http://localhost:8000/api/v1";

// Trigger a job
const response = await fetch(`${baseUrl}/jobs/run`, {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${apiKey}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    job_id: "your-job-id",
    trigger: "manual"
  })
});

const data = await response.json();
console.log("Job started:", data);

Complete API Coverage

Job Management

Create, update, delete, and trigger jobs programmatically

Real-Time Status

Query job run status, progress, and metrics via REST or WebSocket

Connection CRUD

Manage data sources and destinations through the API

Usage Analytics

Track credit consumption, performance metrics, and execution history
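A common pattern with the status endpoint is polling a run until it finishes. Below is a sketch with the fetch function injected, so it works with any HTTP client; the terminal status names and any endpoint path are assumptions for illustration:

```python
import time

# Poll a job run until it reaches a terminal state. `get_status` is any
# callable returning the current status string, e.g. a wrapper around a
# GET request to the job-run status endpoint (path assumed, not documented here).

TERMINAL = {"completed", "failed", "cancelled"}

def wait_for_run(get_status, interval=2.0, timeout=600.0, sleep=time.sleep):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL:
            return status
        sleep(interval)
    raise TimeoutError("job run did not finish in time")
```

Injecting `sleep` also makes the helper trivially testable without real delays.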

Full OpenAPI specification at http://localhost:8000/api/v1/openapi.json

Interactive docs available at /docs (Swagger) and /redoc

Webhook Notifications

Configure webhooks to receive real-time notifications when jobs complete, fail, or hit specific thresholds.

Job Events

job.started, job.completed, job.failed, job.cancelled

Alert Triggers

Threshold alerts, credit warnings, connection failures

Retry Logic

Exponential backoff with configurable retry attempts
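The retry behaviour described above, exponential backoff with a configurable attempt count, can be sketched like this; the delay constants are illustrative, not Kusi's actual defaults:

```python
import time

def with_retries(deliver, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `deliver` until it succeeds, doubling the wait after each failure."""
    for attempt in range(max_attempts):
        try:
            return deliver()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # attempts exhausted: surface the last error
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s, ...
```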

Ready to automate your data pipelines?

Join the limited beta and get free credits to test Kusi with your real workflows. Early adopters get priority support and special pricing.

Limited beta spots · Free credits for testing · Priority support included