Migration Guide

Migrate from IBM Guardium or Imperva to DB Audit with minimal disruption. This guide covers policy conversion, parallel deployment, and validation.

Why Organizations Migrate

Organizations migrate from legacy DAM solutions to DB Audit for faster deployment, lower costs, and better cloud support.

| Capability      | Guardium          | Imperva        | DB Audit         |
|-----------------|-------------------|----------------|------------------|
| Deployment Time | Weeks to months   | Days to weeks  | Minutes to hours |
| Architecture    | Agent-based       | Gateway/Agent  | Agentless        |
| Database Impact | 3-15% overhead    | 2-10% overhead | Zero overhead    |
| Cloud Support   | Limited           | Partial        | Native           |
| ML Detection    | Add-on            | Basic          | Built-in         |
| Pricing Model   | Per server        | Per server     | Per event volume |
| Infrastructure  | Dedicated servers | Appliances     | Cloud-native     |

Migration Process

1. Assessment & Planning

Inventory your current DAM setup and plan the migration.

  • Document all monitored databases and their configurations
  • Export existing audit policies and rules
  • Identify critical alerts and reports
  • Map user roles and permissions
  • Determine data retention requirements
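The assessment inventory is easier to diff against later phases if it is captured in a machine-readable manifest. A minimal sketch in Python; the field names here are illustrative, not a DB Audit schema:

```python
import json

# Hypothetical inventory manifest for the assessment phase;
# field names are illustrative, not a DB Audit format.
inventory = {
    "databases": [
        {"name": "payments", "engine": "postgres", "monitored_by": "guardium"},
        {"name": "hr", "engine": "mysql", "monitored_by": "guardium"},
    ],
    "policies_exported": 12,
    "critical_alerts": ["privileged-user-monitoring", "bulk-data-export"],
    "retention_years": 7,
}

# Persist the manifest so the validation phase can compare coverage against it
with open("dam-inventory.json", "w") as f:
    json.dump(inventory, f, indent=2)
```

During validation, the same manifest can be reloaded and checked against the list of databases DB Audit actually monitors.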
2. Parallel Deployment

Deploy DB Audit alongside your existing solution.

  • Install DB Audit collectors
  • Connect to the same databases
  • Import or recreate audit policies
  • Configure alerting channels
  • Run both systems in parallel
3. Validation

Verify DB Audit captures all required events.

  • Compare event counts between systems
  • Validate alert triggering
  • Test compliance reports
  • Verify data classification accuracy
  • Confirm SIEM integration
4. Cutover

Transition to DB Audit as primary and decommission legacy.

  • Update runbooks and procedures
  • Redirect alerts to DB Audit
  • Train team on new interface
  • Decommission legacy agents
  • Archive legacy data per retention policy

Policy Conversion

Convert your existing audit policies to DB Audit format. We provide migration tools and examples for common policy patterns.

From IBM Guardium

# Guardium Policy Export
# Export your policies from Guardium:
# Administration > Policy Builder > Export

# Guardium policy structure
<policy name="Privileged User Monitoring">
  <rule>
    <condition field="DB_USER" operator="IN"
               value="root,admin,dba"/>
    <action type="ALERT" severity="HIGH"/>
  </rule>
</policy>

# Equivalent DB Audit policy
# policies.yaml
policies:
  - name: privileged-user-monitoring
    description: Monitor privileged database users
    rules:
      - condition:
          user:
            in: ["root", "admin", "dba"]
        action: alert
        severity: high
    databases: ["*"]
    enabled: true
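Conversions like the one above can be scripted rather than retyped. The sketch below parses the simplified Guardium XML export shown and emits the equivalent DB Audit policy structure; it assumes the single-condition rule layout from the example and is not a complete converter:

```python
import xml.etree.ElementTree as ET

def guardium_to_dbaudit(xml_text: str) -> dict:
    """Convert a simplified Guardium policy export to a DB Audit policy dict."""
    policy = ET.fromstring(xml_text)
    rules = []
    for rule in policy.findall("rule"):
        cond = rule.find("condition")
        action = rule.find("action")
        rules.append({
            "condition": {
                # Guardium's DB_USER field maps to DB Audit's `user` field
                cond.get("field").lower().replace("db_", ""): {
                    cond.get("operator").lower(): cond.get("value").split(","),
                }
            },
            "action": action.get("type").lower(),
            "severity": action.get("severity").lower(),
        })
    return {
        "name": policy.get("name").lower().replace(" ", "-"),
        "rules": rules,
        "databases": ["*"],
        "enabled": True,
    }

guardium_xml = """
<policy name="Privileged User Monitoring">
  <rule>
    <condition field="DB_USER" operator="IN" value="root,admin,dba"/>
    <action type="ALERT" severity="HIGH"/>
  </rule>
</policy>
"""
converted = guardium_to_dbaudit(guardium_xml)
```

The resulting dict can be serialized to YAML and dropped into policies.yaml, or compared against the output of the automated importer described below.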

From Imperva SecureSphere

# Imperva SecureSphere Policy Export
# Export via: Policies > Data Security > Export

# Imperva policy structure (simplified)
{
  "policy_name": "PCI-DSS Cardholder Data",
  "rules": [
    {
      "table_group": "cardholder_data",
      "action": "audit_and_alert",
      "alert_severity": "critical"
    }
  ]
}

# Equivalent DB Audit policy
policies:
  - name: pci-dss-cardholder-data
    description: Monitor access to cardholder data tables
    rules:
      - condition:
          tables:
            in: ["credit_cards", "payment_info", "card_tokens"]
        action: alert
        severity: critical
        classification_required: ["PCI"]
    databases: ["payments_*"]
    enabled: true
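The Imperva conversion follows the same pattern, with one extra step: SecureSphere table groups must be expanded into concrete table lists. A sketch under those assumptions; the table-group mapping is illustrative:

```python
import json

# Map Imperva table groups to concrete table lists; illustrative values only.
TABLE_GROUPS = {
    "cardholder_data": ["credit_cards", "payment_info", "card_tokens"],
}

def imperva_to_dbaudit(export: dict) -> dict:
    """Convert a simplified Imperva SecureSphere export to a DB Audit policy dict."""
    rules = []
    for rule in export["rules"]:
        rules.append({
            "condition": {"tables": {"in": TABLE_GROUPS[rule["table_group"]]}},
            # "audit_and_alert" maps to DB Audit's alert action
            "action": "alert" if "alert" in rule["action"] else "log",
            "severity": rule["alert_severity"],
        })
    return {
        "name": export["policy_name"].lower().replace(" ", "-"),
        "rules": rules,
        "enabled": True,
    }

imperva_export = json.loads("""
{
  "policy_name": "PCI-DSS Cardholder Data",
  "rules": [
    {"table_group": "cardholder_data",
     "action": "audit_and_alert",
     "alert_severity": "critical"}
  ]
}
""")
converted = imperva_to_dbaudit(imperva_export)
```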

Automated Policy Import

Use the DB Audit CLI to import policies from exported files.

# Import policies from JSON export
dbaudit-cli policies import \
  --file guardium-export.json \
  --format guardium \
  --preview  # Show what will be created

# Preview output
Policies to import:
  1. Privileged User Monitoring (3 rules) -> privileged-user-monitoring
  2. After Hours Access (2 rules) -> after-hours-access
  3. Bulk Data Export (1 rule) -> bulk-data-export

Proceed with import? [y/N]

# Import with automatic rule conversion
dbaudit-cli policies import \
  --file guardium-export.json \
  --format guardium \
  --apply

Parallel Deployment

Run DB Audit alongside your existing solution to validate coverage before cutover.

Recommended Parallel Period

Run both systems for 2-4 weeks to validate event capture across different usage patterns and edge cases.

# config.yaml for parallel deployment
collector:
  # Tag events from this collector for comparison
  metadata:
    migration_source: "dbaudit"
    parallel_run: true

  # Optional: Only capture events also captured by legacy
  # (useful for validation phase)
  sampling:
    enabled: false

# During parallel run, compare event counts:
# DB Audit Dashboard > Analytics > Event Volume
# vs
# Guardium Reports > Activity Summary

Migration Validation

Compare event counts and alert triggering between systems to validate migration completeness.

# Validation queries to compare systems

# 1. Compare total event counts (last 24 hours)
# Run on both systems and compare

# DB Audit:
curl -X POST "https://api.dbaudit.ai/v1/events/aggregate" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "metrics": ["count"],
    "filters": {
      "time_range": {"last": "24h"}
    },
    "group_by": ["database_name"]
  }'

# 2. Compare alert counts
curl -X GET "https://api.dbaudit.ai/v1/alerts?start_date=$(date -d '24 hours ago' -Iseconds)&severity=critical,high" \
  -H "Authorization: Bearer $API_KEY"

# 3. Validate specific high-value events
# E.g., all DELETE operations should match
curl -X POST "https://api.dbaudit.ai/v1/events/search" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "filters": {
      "operation": {"in": ["DELETE", "DROP", "TRUNCATE"]},
      "time_range": {"last": "24h"}
    }
  }'

Validation Checklist

  • Event counts within 5% tolerance
  • All critical alerts firing correctly
  • Compliance reports generating successfully
  • SIEM integration receiving events
  • Data classification tags matching
  • User roles and permissions configured
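The 5% tolerance check is easy to automate once per-database counts have been exported from both systems (e.g., via the aggregate queries above). A minimal sketch, assuming counts are available as plain dicts:

```python
def within_tolerance(legacy: dict, dbaudit: dict, tolerance: float = 0.05) -> dict:
    """Compare per-database event counts; flag databases outside tolerance."""
    results = {}
    for db in set(legacy) | set(dbaudit):
        old, new = legacy.get(db, 0), dbaudit.get(db, 0)
        if old == 0:
            # A database missing from the legacy baseline only passes
            # if DB Audit also saw nothing for it.
            results[db] = new == 0
        else:
            # Relative difference measured against the legacy baseline
            results[db] = abs(new - old) / old <= tolerance
    return results

# Illustrative counts from a 24-hour parallel-run window
legacy_counts = {"payments": 120_000, "hr": 8_400, "analytics": 510}
dbaudit_counts = {"payments": 118_900, "hr": 8_420}

checks = within_tolerance(legacy_counts, dbaudit_counts)
```

In this example, payments and hr pass (under 1% difference), while analytics fails because DB Audit reported no events for it, pointing to a database that was never connected.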

Historical Data & Archival

DB Audit does not import historical events from legacy systems. Export and archive historical data per your retention policies before decommissioning.

# Export historical data before decommissioning
# Note: DB Audit doesn't import historical events,
# but you can archive for compliance

# Export from Guardium
# Reports > Audit Data > Export to CSV
# Retain per your compliance requirements

# Export from Imperva
# Audit > Reports > Export Historical Data

# Store in compliant archive
aws s3 cp guardium-historical-export/ \
  s3://compliance-archive/dam-migration/guardium/ \
  --recursive \
  --storage-class GLACIER

# Document retention in migration records
Migration completed: 2025-01-15
Historical data archived: s3://compliance-archive/dam-migration/
Retention policy: 7 years (SOX requirement)
Decommission date: 2025-02-15

Common Migration Issues

Missing events compared to legacy

Cause: Database audit logging not fully enabled

Solution: Verify native audit logging is configured per our database connector guides. Some legacy tools capture traffic via a network tap and may therefore see events that never reach the database's audit log.

Different event counts

Cause: Legacy tools may count differently (e.g., per statement vs per transaction)

Solution: Compare specific event types rather than totals. Focus on security-relevant events.

Policy conversion errors

Cause: Complex legacy rules may not have direct equivalents

Solution: Review converted policies manually. Some rules may need to be recreated using DB Audit's policy builder.

Alert noise after migration

Cause: Default thresholds may differ from tuned legacy system

Solution: Run in monitor-only mode initially and tune thresholds based on baseline.

Need Migration Help?

Our team can assist with complex migrations, policy conversion, and validation.