Overview

Tool Name

process_scheduler_tools

Purpose

The process_scheduler_tools enable scheduling and automation of recurring tasks within Genesis Data Agents. Create scheduled jobs that execute prompts, trigger workflows, run data pipelines, and perform automated operations at specified intervals or times. Perfect for ETL automation, report generation, monitoring tasks, data refreshes, and any operation that needs to run on a schedule.

Functions Available

  1. scheduler_action: Manage scheduled jobs including creation, execution, monitoring, and history tracking with support for cron, interval, and one-time scheduling.

Key Features

Flexible Scheduling

Support for cron expressions, interval-based scheduling, and one-time execution dates.

Prompt Execution

Schedule natural language prompts to be executed by specified bots at scheduled times.

Job Management

Create, list, pause, resume, modify, and delete scheduled jobs with full lifecycle control.

Execution History

Track job run history with timestamps, outputs, and execution status for auditing.

Manual Triggers

Execute scheduled jobs on-demand without waiting for the next scheduled time.

Error Handling

Automatic retry logic, failure notifications, and detailed error logging.

Input Parameters for Each Function

scheduler_action

Parameters
| Name | Definition | Format |
| --- | --- | --- |
| action | Scheduler operation to perform. Values: STATUS, ADD_JOB, LIST_JOBS, GET_JOB, REMOVE_JOB, PAUSE_JOB, RESUME_JOB, RUN_JOB, MODIFY_JOB, GET_HISTORY, GET_RUN, CLEAR_HISTORY, KILL_RUN. | String (required) |
| job_id | Unique identifier for the scheduled job. Required for most job-specific operations. | String |
| what_to_do_prompt | Natural language prompt describing the task to execute. This is the ACTUAL TASK, not a scheduling request. | String |
| trigger | Scheduling configuration as JSON object. Types: cron, interval, date. | Object |
| job_bot_id | Bot ID that will execute the scheduled job (defaults to current bot). | String |
| job_thread_id | Thread ID for job execution context (optional). | String |
| name | Human-readable name for the job (optional). | String |
| coalesce | Whether to run missed jobs as a single execution (default: False). | Boolean |
| max_instances | Maximum concurrent instances of job (default: 1). | Integer |
| misfire_grace_time | Seconds to wait before considering job misfired (default: None). | Integer |
| run_id | Specific run identifier for retrieving execution details or killing runs. | String |
| limit | Maximum number of history records to return (default: 100). | Integer |
| offset | Number of history records to skip for pagination (default: 0). | Integer |
The what_to_do_prompt should describe the actual task to perform, not the scheduling itself. For example: “Check for new data and update the dashboard” NOT “Schedule a check every 5 minutes”.
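
For example, a minimal sketch of a correctly phrased prompt (the job ID and interval below are illustrative):
scheduler_action(
    action="ADD_JOB",
    job_id="dashboard_refresh",                                       # illustrative job ID
    what_to_do_prompt="Check for new data and update the dashboard",  # the task itself
    trigger={"type": "interval", "minutes": 5}                        # the schedule lives in the trigger, not the prompt
)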

Use Cases

  1. Automated Data Refreshes Schedule regular data extracts, transformations, and loads to keep dashboards and reports current.
  2. Report Generation Generate and distribute daily, weekly, or monthly reports automatically at specified times.
  3. System Monitoring Check system health, data quality, pipeline status, or SLA compliance at regular intervals.
  4. Data Quality Checks Run validation jobs to detect anomalies, missing data, or schema changes on a schedule.
  5. Batch Processing Schedule resource-intensive operations during off-peak hours to optimize system performance.

Workflow/How It Works

  1. Step 1: Check Scheduler Status Verify scheduler is running and get statistics:
    scheduler_action(action="STATUS")
    
  2. Step 2: Create Cron-Based Job Schedule a job using cron expression (runs every day at 9 AM):
    scheduler_action(
        action="ADD_JOB",
        job_id="daily_sales_report",
        name="Daily Sales Report Generator",
        what_to_do_prompt="Generate sales report for yesterday and send to the team",
        trigger={
            "type": "cron",
            "hour": 9,
            "minute": 0
        }
    )
    
  3. Step 3: Create Interval-Based Job Schedule a job to run every 30 minutes:
    scheduler_action(
        action="ADD_JOB",
        job_id="data_freshness_check",
        name="Data Freshness Monitor",
        what_to_do_prompt="Check if source data has been updated in the last hour and alert if stale",
        trigger={
            "type": "interval",
            "minutes": 30
        }
    )
    
  4. Step 4: Create One-Time Job Schedule a job for specific date/time:
    scheduler_action(
        action="ADD_JOB",
        job_id="quarterly_analysis",
        name="Q1 2024 Analysis",
        what_to_do_prompt="Perform comprehensive Q1 analysis and generate executive summary",
        trigger={
            "type": "date",
            "run_date": "2024-04-01T08:00:00"
        }
    )
    
  5. Step 5: List All Jobs View all scheduled jobs:
    scheduler_action(action="LIST_JOBS")
    
  6. Step 6: Get Job Details Retrieve specific job configuration:
    scheduler_action(
        action="GET_JOB",
        job_id="daily_sales_report"
    )
    
  7. Step 7: Run Job Manually Trigger immediate execution:
    scheduler_action(
        action="RUN_JOB",
        job_id="daily_sales_report"
    )
    
  8. Step 8: View Execution History Check job run history:
    scheduler_action(
        action="GET_HISTORY",
        job_id="daily_sales_report",
        limit=10
    )
    
  9. Step 9: Pause/Resume Jobs Temporarily disable or re-enable jobs:
    # Pause job
    scheduler_action(
        action="PAUSE_JOB",
        job_id="data_freshness_check"
    )
    
    # Resume job
    scheduler_action(
        action="RESUME_JOB",
        job_id="data_freshness_check"
    )
    
  10. Step 10: Modify Existing Job Update job configuration:
    scheduler_action(
        action="MODIFY_JOB",
        job_id="daily_sales_report",
        trigger={
            "type": "cron",
            "hour": 8,
            "minute": 30
        }
    )
    

Integration Relevance

  • data_connector_tools to schedule automated data extraction and loading operations.
  • airflow_tools to trigger Airflow DAGs on schedule or as part of workflow orchestration.
  • dbt_action to schedule dbt model runs for regular data transformations.
  • github_connector_tools / gitlab_connector_tools to schedule automated commits or deployments.
  • slack_tools to send scheduled notifications, reports, or alerts.
  • project_manager_tools to schedule task execution and mission progress updates.
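
Because scheduled jobs simply hand a prompt to a bot, integration typically happens inside the what_to_do_prompt rather than through extra parameters. A hedged sketch (the job ID, schedule, and prompt wording are illustrative):
scheduler_action(
    action="ADD_JOB",
    job_id="nightly_dbt_run",    # illustrative job ID
    name="Nightly dbt Run",
    what_to_do_prompt="Run the dbt models for the analytics project and post any failures to the #data-ops Slack channel",
    trigger={"type": "cron", "hour": 4, "minute": 0}
)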

Configuration Details

  • Scheduler Type: APScheduler with persistent job store (survives restarts).
  • Time Zone: Jobs run in UTC by default; specify timezone in cron/date triggers.
  • Persistence: Job definitions and history stored in database.
  • Concurrency: Configurable max_instances per job (default: 1).
  • Missed Jobs: The coalesce option determines whether missed jobs run once or are skipped (see the example after this list).
  • Grace Period: misfire_grace_time allows late job execution within a grace window.
  • Job Storage: Job run outputs and logs retained based on retention policy.
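
A hedged example combining these settings (the job ID, prompt, and values shown are illustrative, not recommended defaults):
scheduler_action(
    action="ADD_JOB",
    job_id="hourly_metrics_rollup",    # illustrative job ID
    name="Hourly Metrics Rollup",
    what_to_do_prompt="Aggregate the last hour of metrics into the rollup table",
    trigger={"type": "cron", "minute": 0, "timezone": "UTC"},
    coalesce=True,              # run missed executions as a single run
    max_instances=1,            # never run two copies concurrently
    misfire_grace_time=300      # still run if the scheduler fires up to 5 minutes late
)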

Trigger Types

Cron Trigger

Schedule jobs using cron-like expressions:
{
    "type": "cron",
    "hour": 9,              # Hour (0-23)
    "minute": 0,            # Minute (0-59)
    "day_of_week": "mon",   # Day of week (mon, tue, wed, thu, fri, sat, sun or 0-6)
    "day": 1,               # Day of month (1-31)
    "month": "*",           # Month (1-12)
    "timezone": "UTC"       # Optional timezone
}
Cron Examples:
  • Every day at 9 AM: {"type": "cron", "hour": 9, "minute": 0}
  • Every Monday at 8:30 AM: {"type": "cron", "hour": 8, "minute": 30, "day_of_week": "mon"}
  • First day of every month: {"type": "cron", "hour": 6, "minute": 0, "day": 1}
  • Every hour: {"type": "cron", "minute": 0}
  • Every 15 minutes: {"type": "cron", "minute": "*/15"}

Interval Trigger

Schedule jobs at regular intervals:
{
    "type": "interval",
    "seconds": 30,      # Optional: interval in seconds
    "minutes": 5,       # Optional: interval in minutes
    "hours": 2,         # Optional: interval in hours
    "days": 1           # Optional: interval in days
}
Interval Examples:
  • Every 30 seconds: {"type": "interval", "seconds": 30}
  • Every 5 minutes: {"type": "interval", "minutes": 5}
  • Every 2 hours: {"type": "interval", "hours": 2}
  • Every day: {"type": "interval", "days": 1}

Date Trigger

Schedule one-time job execution:
{
    "type": "date",
    "run_date": "2024-04-01T08:00:00"  # ISO format date/time
}
For cron triggers, be mindful of timezone settings. All times default to UTC unless explicitly specified. Convert local times to UTC or specify timezone in trigger configuration.
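
For example, a 9 AM Eastern Time schedule can be expressed by naming the timezone explicitly (IANA zone name shown for illustration):
{"type": "cron", "hour": 9, "minute": 0, "timezone": "America/New_York"}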

Limitations or Notes

  1. Execution Environment: Jobs execute in bot context with access to bot’s tools and permissions.
  2. Long-Running Jobs: Jobs with execution time > 5 minutes may be terminated (configure timeout if needed).
  3. Concurrency: max_instances controls how many copies of a job can run simultaneously.
  4. Missed Executions: If scheduler is down during scheduled time, coalesce determines behavior.
  5. History Retention: Job run history retained for configurable period (default: 90 days).
  6. Job Limits: Recommended maximum of 100 active jobs per bot for performance.
  7. Prompt Complexity: Keep what_to_do_prompt clear and focused; complex multi-step operations may need breaking down.
  8. Thread Context: If job_thread_id is not specified, each run creates a new thread context (see the sketch after this list for pinning runs to a shared thread).
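
If successive runs should share context, the documented job_thread_id parameter can pin the job to a thread. A hedged sketch (the job and thread IDs are placeholders):
scheduler_action(
    action="ADD_JOB",
    job_id="incremental_sync",                   # illustrative job ID
    what_to_do_prompt="Sync any new records since the previous run",
    job_thread_id="thread_incremental_sync",     # placeholder thread ID; all runs share this context
    trigger={"type": "interval", "minutes": 30}
)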

Supported Actions

STATUS - Get scheduler status and statistics
ADD_JOB - Create new scheduled job
LIST_JOBS - List all scheduled jobs
GET_JOB - Get specific job details
REMOVE_JOB - Delete scheduled job
PAUSE_JOB - Temporarily disable job
RESUME_JOB - Re-enable paused job
RUN_JOB - Execute job immediately
MODIFY_JOB - Update job configuration
GET_HISTORY - View job execution history
GET_RUN - Get specific run details and output
CLEAR_HISTORY - Delete old history records
KILL_RUN - Terminate running job execution
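
Most of these actions appear in the workflow above; KILL_RUN is the exception, so here is a hedged sketch (the run ID is assumed to come from a prior RUN_JOB or GET_HISTORY call):
scheduler_action(
    action="KILL_RUN",
    run_id=run_result['run_id']    # run_id obtained earlier, e.g. from RUN_JOB or GET_HISTORY
)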

Not Supported

❌ Sub-second scheduling (minimum interval: 1 second)
❌ Conditional triggers based on external events (use separate monitoring job)
❌ Cross-bot job dependencies (implement in job prompt logic)
❌ Job priority or queue management
❌ Distributed job execution across multiple scheduler instances
❌ Job chaining or workflow DAGs (use Airflow for complex workflows)
❌ Dynamic schedule modification during execution

Output

  • ADD_JOB: Job ID, next run time, and creation confirmation.
  • LIST_JOBS: Table of jobs with ID, name, trigger type, next run time, and status.
  • GET_JOB: Complete job configuration including trigger, bot assignment, and metadata.
  • GET_HISTORY: List of execution records with run ID, start time, end time, status, and output summary.
  • GET_RUN: Complete run details including full output, error messages, and execution context.
  • RUN_JOB: Run ID and confirmation of manual execution trigger.
  • STATUS: Scheduler health, active jobs count, running jobs, and pending executions.
  • Errors: Detailed error messages with troubleshooting guidance.
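
Exact payloads depend on the deployment; as a rough illustration (field names inferred from the examples later on this page, not a guaranteed schema), LIST_JOBS and GET_HISTORY results can be consumed like this:
jobs = scheduler_action(action="LIST_JOBS")
for job in jobs['jobs']:                               # job list as used in the workflow example below
    print(job['job_id'], job['name'], job['next_run_time'])

history = scheduler_action(action="GET_HISTORY", job_id="daily_sales_report", limit=5)
for run in history['runs']:                            # each record carries run ID, timing, and status
    print(run['run_id'], run['start_time'], run['status'])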

Best Practices

Clear Prompts

Write specific, actionable prompts. Good: “Query sales table and email summary”. Bad: “Do sales stuff”.

Appropriate Intervals

Don’t over-schedule. Balance freshness needs with system load. Consider impact of frequent executions.

Error Handling

Include error handling instructions in prompts: “If query fails, log error and notify team”.

Monitor History

Regularly review execution history to catch failures, performance issues, or unexpected behavior.

Use Descriptive Names

Name jobs clearly to indicate purpose. Makes management and troubleshooting easier.

Test Before Scheduling

Run jobs manually first to verify they work correctly before setting them on a schedule.
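
One hedged pattern for this (all IDs and dates are placeholders): create the job with a far-future date trigger so it never fires on its own, trigger it manually, inspect the run, then switch to the real schedule.
scheduler_action(
    action="ADD_JOB",
    job_id="report_job_trial",    # illustrative job ID
    what_to_do_prompt="Generate the weekly report and save it to the shared drive",
    trigger={"type": "date", "run_date": "2099-01-01T00:00:00"}    # placeholder date; effectively never fires
)
result = scheduler_action(action="RUN_JOB", job_id="report_job_trial")
details = scheduler_action(action="GET_RUN", run_id=result['run_id'])
print(details['status'], details['output'])

# Once satisfied, switch to the real schedule
scheduler_action(
    action="MODIFY_JOB",
    job_id="report_job_trial",
    trigger={"type": "cron", "hour": 9, "minute": 0, "day_of_week": "mon"}
)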

Example: Complete Scheduling Workflow

# Step 1: Check scheduler is running
status = scheduler_action(action="STATUS")
print(f"Scheduler status: {status}")

# Step 2: Create daily data refresh job (runs at 2 AM UTC)
scheduler_action(
    action="ADD_JOB",
    job_id="daily_customer_refresh",
    name="Daily Customer Data Refresh",
    what_to_do_prompt="""
Execute the following data refresh workflow:
1. Query the customer database for updates since last run
2. Transform data according to business rules
3. Load into analytics warehouse
4. Run data quality checks
5. If any issues found, send alert to #data-ops Slack channel
6. On success, update last_run timestamp
""",
    trigger={
        "type": "cron",
        "hour": 2,
        "minute": 0
    },
    coalesce=True,
    max_instances=1
)

# Step 3: Create frequent monitoring job (every 5 minutes)
scheduler_action(
    action="ADD_JOB",
    job_id="pipeline_health_check",
    name="Pipeline Health Monitor",
    what_to_do_prompt="""
Check health of all active data pipelines:
1. Query pipeline_status table for jobs running longer than expected
2. Check for any jobs in 'failed' state in the last hour
3. Verify data freshness for critical tables (< 2 hours old)
4. If any issues detected, send alert with details
5. Log check results to monitoring table
""",
    trigger={
        "type": "interval",
        "minutes": 5
    },
    max_instances=1
)

# Step 4: Create weekly report job (every Monday at 9 AM)
scheduler_action(
    action="ADD_JOB",
    job_id="weekly_executive_report",
    name="Weekly Executive Report",
    what_to_do_prompt="""
Generate weekly executive report:
1. Calculate KPIs for previous week (revenue, customers, growth metrics)
2. Generate trend charts for key metrics
3. Create summary document with insights
4. Save report to shared drive
5. Send email notification with report link to executive team
6. Post summary to #exec-updates Slack channel
""",
    trigger={
        "type": "cron",
        "hour": 9,
        "minute": 0,
        "day_of_week": "mon"
    }
)

# Step 5: Create end-of-month job (runs on the 1st at midnight)
scheduler_action(
    action="ADD_JOB",
    job_id="monthly_close_process",
    name="Monthly Financial Close",
    what_to_do_prompt="""
Execute month-end financial close process:
1. Run month-end reconciliation queries
2. Generate financial statements
3. Calculate month-over-month variance analysis
4. Archive previous month data
5. Initialize new month tracking tables
6. Send completion notification to finance team
""",
    trigger={
        "type": "cron",
        "hour": 0,
        "minute": 0,
        "day": 1
    },
    max_instances=1
)

# Step 6: List all scheduled jobs
jobs = scheduler_action(action="LIST_JOBS")
print(f"Total scheduled jobs: {len(jobs['jobs'])}")
for job in jobs['jobs']:
    print(f"- {job['name']}: next run at {job['next_run_time']}")

# Step 7: Run a job manually for testing
print("\\nTesting pipeline health check...")
run_result = scheduler_action(
    action="RUN_JOB",
    job_id="pipeline_health_check"
)
print(f"Manual run triggered: {run_result['run_id']}")

# Wait for execution to complete
import time
time.sleep(5)

# Step 8: Check execution results
run_details = scheduler_action(
    action="GET_RUN",
    run_id=run_result['run_id']
)
print(f"Run status: {run_details['status']}")
print(f"Output: {run_details['output']}")

# Step 9: View execution history for a job
history = scheduler_action(
    action="GET_HISTORY",
    job_id="pipeline_health_check",
    limit=5
)
print(f"\\nLast 5 executions of pipeline health check:")
for run in history['runs']:
    print(f"- {run['start_time']}: {run['status']} ({run['duration']}s)")

# Step 10: Modify job schedule (change to every 10 minutes instead of 5)
scheduler_action(
    action="MODIFY_JOB",
    job_id="pipeline_health_check",
    trigger={
        "type": "interval",
        "minutes": 10
    }
)
print("Updated health check to run every 10 minutes")

# Step 11: Pause job temporarily (e.g., during maintenance)
scheduler_action(
    action="PAUSE_JOB",
    job_id="daily_customer_refresh"
)
print("Paused daily refresh job for maintenance")

# Later: Resume the job
scheduler_action(
    action="RESUME_JOB",
    job_id="daily_customer_refresh"
)
print("Resumed daily refresh job")

# Step 12: Clean up old history (keep last 30 days)
scheduler_action(
    action="CLEAR_HISTORY",
    job_id="pipeline_health_check",
    older_than_days=30
)
print("Cleared history older than 30 days")

Advanced Features

Conditional Execution

Implement conditional logic in job prompts:
scheduler_action(
    action="ADD_JOB",
    job_id="smart_refresh",
    name="Conditional Data Refresh",
    what_to_do_prompt="""
Execute smart data refresh:
1. Check if source data has been updated since last run
2. If no new data, skip processing and log skip reason
3. If new data exists:
   a. Calculate approximate processing time based on data volume
   b. If > 10k new records, send notification that large batch is processing
   c. Execute refresh
   d. Validate results
   e. Send completion summary
""",
    trigger={
        "type": "interval",
        "minutes": 15
    }
)

Multi-Step Workflows

Chain multiple operations in a single scheduled job:
scheduler_action(
    action="ADD_JOB",
    job_id="etl_pipeline",
    name="Complete ETL Pipeline",
    what_to_do_prompt="""
Execute complete ETL pipeline:

EXTRACT:
1. Pull data from production database (last 24 hours)
2. Extract data from API endpoints
3. Download files from S3 bucket

TRANSFORM:
4. Validate data quality (check for nulls, duplicates, schema)
5. Apply business logic transformations
6. Aggregate metrics by date and region
7. Enrich with reference data

LOAD:
8. Load to staging tables
9. Run merge to production tables
10. Update materialized views
11. Refresh BI tool cache

VALIDATE & NOTIFY:
12. Run post-load validation queries
13. Compare record counts before/after
14. Generate execution summary
15. Send success notification with metrics
16. If any step fails, rollback and alert immediately
""",
    trigger={
        "type": "cron",
        "hour": 3,
        "minute": 0
    },
    max_instances=1
)

Dynamic Scheduling

Create jobs programmatically based on configuration:
# Define multiple similar jobs from configuration
data_sources = [
    {"name": "customers", "table": "dim_customers", "hour": 1},
    {"name": "orders", "table": "fact_orders", "hour": 2},
    {"name": "products", "table": "dim_products", "hour": 3},
    {"name": "inventory", "table": "fact_inventory", "hour": 4}
]

for source in data_sources:
    scheduler_action(
        action="ADD_JOB",
        job_id=f"refresh_{source['name']}",
        name=f"{source['name'].title()} Data Refresh",
        what_to_do_prompt=f"""
Refresh {source['name']} data:
1. Query source system for {source['name']} updates
2. Stage in temp table
3. Merge to {source['table']}
4. Log row counts and execution time
5. Update metadata table with refresh timestamp
""",
        trigger={
            "type": "cron",
            "hour": source['hour'],
            "minute": 0
        }
    )
    print(f"Created refresh job for {source['name']}")

Error Recovery Jobs

Create monitoring jobs that check for and recover from failures:
scheduler_action(
    action="ADD_JOB",
    job_id="error_recovery",
    name="Automatic Error Recovery",
    what_to_do_prompt="""
Check for and recover from pipeline errors:
1. Query error_log table for failures in last hour
2. For each failed job:
   a. Check if error is transient (connection timeout, lock, etc.)
   b. If transient, attempt retry with exponential backoff
   c. If persistent, escalate to on-call engineer
   d. Log recovery attempt and outcome
3. Update error_log with recovery status
4. If multiple failures of same type, create incident ticket
""",
    trigger={
        "type": "interval",
        "minutes": 15
    }
)

Scheduled Cleanup Jobs

Maintain system health with automated cleanup:
scheduler_action(
    action="ADD_JOB",
    job_id="nightly_cleanup",
    name="Nightly System Cleanup",
    what_to_do_prompt="""
Perform system maintenance and cleanup:
1. Archive logs older than 90 days
2. Delete temporary tables created > 7 days ago
3. Vacuum and analyze frequently updated tables
4. Clear cached query results older than 24 hours
5. Remove orphaned files from staging areas
6. Compact transaction log files
7. Generate cleanup summary report
8. Log space reclaimed and performance improvements
""",
    trigger={
        "type": "cron",
        "hour": 1,
        "minute": 30
    }
)

Monitoring & Alerting

Track Job Health

from datetime import datetime

def monitor_scheduled_jobs():
    """Monitor health of all scheduled jobs"""
    # Get all jobs
    jobs = scheduler_action(action="LIST_JOBS")
    
    issues = []
    
    for job in jobs['jobs']:
        # Get recent history
        history = scheduler_action(
            action="GET_HISTORY",
            job_id=job['job_id'],
            limit=5
        )
        
        # Check for consecutive failures
        recent_runs = history['runs']
        if len(recent_runs) >= 3:
            if all(run['status'] == 'failed' for run in recent_runs[:3]):
                issues.append({
                    'job_id': job['job_id'],
                    'issue': 'Three consecutive failures',
                    'severity': 'high'
                })
        
        # Check for long-running jobs
        for run in recent_runs:
            if run['status'] == 'running' and run['duration'] > 3600:
                issues.append({
                    'job_id': job['job_id'],
                    'issue': f'Job running for {run["duration"]}s',
                    'severity': 'medium'
                })
        
        # Check for stale jobs (not run in expected timeframe)
        if job['last_run_time']:
            last_run = datetime.fromisoformat(job['last_run_time'])
            now = datetime.now()
            hours_since = (now - last_run).total_seconds() / 3600
            
            # For hourly jobs, alert if no run in 3 hours
            if 'interval' in job['trigger'] and hours_since > 3:
                issues.append({
                    'job_id': job['job_id'],
                    'issue': f'No execution in {hours_since:.1f} hours',
                    'severity': 'medium'
                })
    
    return issues

# Run monitoring
issues = monitor_scheduled_jobs()
if issues:
    print(f"Found {len(issues)} issues:")
    for issue in issues:
        print(f"⚠️ {issue['job_id']}: {issue['issue']} [{issue['severity']}]")
else:
    print("✅ All scheduled jobs healthy")

Troubleshooting

Job not running on schedule

  • Verify job is not paused (check status with GET_JOB)
  • Check scheduler status (use STATUS action)
  • Verify trigger configuration is correct
  • Check for timezone mismatches (cron times are UTC by default)
  • Review misfire_grace_time setting
  • Check if max_instances limit is reached

Job runs but fails

  • Review error message in GET_RUN output
  • Verify what_to_do_prompt is valid and clear
  • Check if bot has required permissions and tools
  • Test prompt manually before scheduling
  • Review job execution history for patterns
  • Ensure bot_id specified is valid and active

Job output is wrong or unexpected

  • Review full output using GET_RUN action
  • Check if prompt is ambiguous or unclear
  • Verify input data and assumptions are correct
  • Test prompt manually to reproduce issue
  • Add more specific instructions to prompt
  • Review execution context and variables

Scheduled runs were missed

  • Check if scheduler was running during scheduled time
  • Review coalesce setting (determines missed job behavior)
  • Check system logs for scheduler downtime
  • Verify misfire_grace_time is appropriate
  • Consider using interval triggers for critical jobs
  • Monitor scheduler health proactively

ADD_JOB returns an error

  • Verify job_id is unique (not already in use)
  • Check trigger configuration format is valid
  • Ensure what_to_do_prompt is provided
  • Verify bot_id exists if specified
  • Check for scheduler capacity limits
  • Review error message for specific validation failures

Job is stuck or running too long

  • Check if job is actually stuck or just slow
  • Review GET_RUN output for progress
  • Use KILL_RUN to terminate if truly stuck
  • Optimize prompt to break into smaller tasks
  • Consider increasing timeout if legitimate long runtime
  • Add progress logging to identify bottlenecks

History is missing or incomplete

  • Allow time for execution to complete
  • Check if job actually executed (verify next_run_time updated)
  • Verify run wasn’t cleared by CLEAR_HISTORY
  • Check offset/limit parameters for pagination
  • Review retention policy settings
  • Query with larger limit to see all history

Scheduler Architecture

Understanding the scheduler components:

Key Components

  • Scheduler Engine: Core APScheduler managing job lifecycle and execution timing
  • Job Store: Persistent storage for job definitions and configurations
  • Executor: Thread pool that runs scheduled jobs
  • Trigger Conditions: Cron, interval, or date-based scheduling logic
  • Run History: Database storing execution records, outputs, and status
  • Bot Instance: Target bot that executes the what_to_do_prompt

Performance Considerations

Avoid Over-Scheduling

Too many frequent jobs can overwhelm the system. Use appropriate intervals based on actual needs.

Optimize Prompts

Keep prompts focused and efficient. Break complex operations into separate jobs if needed.

Monitor Resource Usage

Track job execution times and resource consumption. Optimize or reschedule resource-intensive jobs.
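
As a rough sketch (the duration field is taken from the history examples above), average run time per job can be derived from GET_HISTORY:
jobs = scheduler_action(action="LIST_JOBS")
for job in jobs['jobs']:
    history = scheduler_action(action="GET_HISTORY", job_id=job['job_id'], limit=20)
    durations = [run['duration'] for run in history['runs'] if run.get('duration')]
    if durations:
        avg = sum(durations) / len(durations)
        print(f"{job['job_id']}: avg {avg:.1f}s over {len(durations)} runs")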

Clean History Regularly

Use CLEAR_HISTORY periodically to prevent history table bloat and maintain performance.
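
A hedged sketch of a periodic cleanup pass over every job (the 30-day threshold mirrors the earlier CLEAR_HISTORY example and is not a required value):
jobs = scheduler_action(action="LIST_JOBS")
for job in jobs['jobs']:
    scheduler_action(
        action="CLEAR_HISTORY",
        job_id=job['job_id'],
        older_than_days=30    # as in the earlier example; adjust to your retention policy
    )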

Comparison: Scheduling Options

| Feature | Process Scheduler | Airflow | Cron (System) |
| --- | --- | --- | --- |
| Setup Complexity | ✅ Simple | ⚠️ Complex | ✅ Simple |
| Scheduling Options | ✅ Cron, interval, date | ✅ Cron-based | ✅ Cron only |
| Job Dependencies | ⚠️ In prompt logic | ✅ Native DAG support | ❌ Manual |
| Execution History | ✅ Built-in database | ✅ Metadata database | ⚠️ Logs only |
| UI Management | ✅ API-based | ✅ Web UI | ⚠️ Config files |
| Monitoring | ✅ GET_HISTORY API | ✅ Rich monitoring | ⚠️ Manual log checking |
| Error Handling | ✅ Captured in history | ✅ Retry logic | ⚠️ Manual |
| Use Case | Simple scheduled tasks | Complex workflows | System maintenance |
Use process_scheduler_tools for straightforward scheduled tasks within Genesis. For complex multi-step workflows with dependencies, consider airflow_tools. For simple system maintenance, native cron may suffice.

Migration Guide

From Cron to Process Scheduler

# Old cron entry
0 9 * * * /path/to/script.sh

# New scheduled job
scheduler_action(
    action="ADD_JOB",
    job_id="daily_task",
    what_to_do_prompt="Execute the daily task script",
    trigger={"type": "cron", "hour": 9, "minute": 0}
)

Benefits of Migration

  • ✅ Execution history and monitoring
  • ✅ Error handling and retry logic
  • ✅ Easy modification without server access
  • ✅ Centralized job management
  • ✅ No need for script files on server