CrowdStrike Falcon LogScale MCP Server
Create a powerful Model Context Protocol (MCP) server for CrowdStrike Falcon LogScale (formerly Humio) in minutes with our AI Gateway. This guide walks you through setting up seamless log analytics integration with enterprise-grade security and instant OAuth authentication.
About CrowdStrike Falcon LogScale API
CrowdStrike Falcon LogScale is a modern log management platform that enables real-time observability at scale. It provides lightning-fast search, live streaming queries, and advanced analytics across petabytes of data with no indexing delays.
Key Capabilities
- Real-time Search: Sub-second query results
- Live Queries: Streaming log analysis
- No Indexing: Instant data availability
- Flexible Schema: Dynamic field extraction
- Advanced Analytics: Statistical functions
- Data Compression: 10-20x storage efficiency
- Alerts & Dashboards: Visual monitoring
- API Integration: Comprehensive access
API Features
- Query API: Advanced log search
- Streaming API: Real-time data access
- Ingest API: High-volume data ingestion
- Repository API: Data management
- Dashboard API: Visualization control
- Alert API: Automated monitoring
- Parser API: Log parsing rules
- User API: Access management
What You Can Do with CrowdStrike Falcon LogScale MCP Server
The MCP server transforms the LogScale API into a natural language interface, enabling AI agents to:
Log Search & Analysis
- Query Execution
  - "Search for failed login attempts"
  - "Find errors in application logs"
  - "Show security events from firewall"
  - "Analyze HTTP 500 errors"
- Time-based Analysis
  - "Show logs from last hour"
  - "Compare today vs yesterday"
  - "Find patterns over time"
  - "Track hourly trends"
- Field Extraction
  - "Extract IP addresses"
  - "Parse JSON fields"
  - "Extract error codes"
  - "Find user agents"
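Requests like "Extract IP addresses" are ultimately answered by pattern matching over raw events. A rough client-side illustration in plain JavaScript (LogScale itself performs this kind of extraction server-side with its own regex functions; this sketch is for intuition only):

```javascript
// Naive IPv4 extractor, similar in spirit to a regex-based
// field-extraction step in a log pipeline. Illustration only.
function extractIPs(line) {
  const ipPattern = /\b(?:\d{1,3}\.){3}\d{1,3}\b/g;
  return line.match(ipPattern) ?? [];
}

const logLine = 'DENY tcp 203.0.113.7:443 -> 10.0.0.5:51234';
console.log(extractIPs(logLine)); // ['203.0.113.7', '10.0.0.5']
```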
Advanced Analytics
- Statistical Analysis
  - "Calculate average response time"
  - "Show top 10 error types"
  - "Count unique users"
  - "Compute percentiles"
- Aggregations
  - "Group by status code"
  - "Sum bytes transferred"
  - "Average request duration"
  - "Count by country"
- Time Series
  - "Plot errors over time"
  - "Show traffic patterns"
  - "Track metric trends"
  - "Analyze seasonality"
Real-time Monitoring
- Live Queries
  - "Stream authentication failures"
  - "Monitor critical errors live"
  - "Watch security events"
  - "Track system metrics"
- Alert Configuration
  - "Alert on error spike"
  - "Notify on security events"
  - "Monitor SLA breaches"
  - "Detect anomalies"
- Dashboard Updates
  - "Refresh security dashboard"
  - "Update metrics display"
  - "Show real-time stats"
  - "Monitor KPIs"
Data Management
- Repository Operations
  - "Create security logs repo"
  - "Configure retention policy"
  - "Set compression level"
  - "Manage access rights"
- Data Ingestion
  - "Configure log shipper"
  - "Set up data pipeline"
  - "Parse incoming logs"
  - "Apply transformations"
- Storage Optimization
  - "Analyze storage usage"
  - "Optimize compression"
  - "Archive old data"
  - "Manage quotas"
Parser Configuration
- Parser Creation
  - "Create JSON parser"
  - "Build regex extractor"
  - "Design CSV parser"
  - "Configure syslog parser"
- Field Mapping
  - "Map log fields"
  - "Extract timestamps"
  - "Parse nested data"
  - "Handle multiline logs"
- Data Enrichment
  - "Add GeoIP data"
  - "Lookup threat intel"
  - "Enrich user info"
  - "Tag log sources"
Query Language
- Search Syntax
  - "Use field operators"
  - "Apply time filters"
  - "Chain functions"
  - "Create subqueries"
- Functions
  - "Apply regex matching"
  - "Use math functions"
  - "String manipulation"
  - "Date calculations"
- Optimization
  - "Improve query performance"
  - "Use efficient filters"
  - "Optimize aggregations"
  - "Reduce data scans"
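The "use efficient filters" advice can be made concrete: in LogScale's pipe syntax each `|` stage only sees the events the previous stage passed, so cheap, selective filters belong at the front of the query. A small sketch of composing such a pipeline as strings (the query text follows LogScale syntax; the `pipeline` helper itself is hypothetical):

```javascript
// Compose LogScale pipeline stages into one query string.
// Putting selective filters first reduces the data later stages scan.
function pipeline(...stages) {
  return stages.join('\n| ');
}

const query = pipeline(
  '@type = "access_log"',      // cheap tag/field filter first
  'status >= 500',             // narrow further before aggregating
  'groupBy(status)',           // aggregate only surviving events
  'sort(_count, reverse=true)'
);
console.log(query);
```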
Visualization
- Chart Creation
  - "Create line charts"
  - "Build bar graphs"
  - "Design heatmaps"
  - "Make pie charts"
- Dashboard Design
  - "Build security dashboard"
  - "Create ops dashboard"
  - "Design executive view"
  - "Configure widgets"
- Report Generation
  - "Generate daily reports"
  - "Create compliance reports"
  - "Export analytics"
  - "Schedule delivery"
Integration
- Data Sources
  - "Connect to syslog"
  - "Integrate with APIs"
  - "Stream from Kafka"
  - "Pull from S3"
- Export Options
  - "Export to CSV"
  - "Send to webhook"
  - "Stream to Kafka"
  - "Archive to S3"
- API Access
  - "Query via REST"
  - "Stream results"
  - "Bulk operations"
  - "Manage programmatically"
Prerequisites
- Access to Cequence AI Gateway
- CrowdStrike Falcon LogScale account
- API token or OAuth credentials
- Repository access permissions
Step 1: Create LogScale API Token
1.1 Access LogScale Console
- Log in to Falcon LogScale
- Navigate to your repository
- Go to Settings > API Tokens
1.2 Create API Token
- Click Create Token
- Configure token:
  - Name: "AI Gateway LogScale MCP"
  - Permissions: Select required permissions
  - Expiration: Set as needed
1.3 Select Permissions
Choose permissions:
- Search: Execute queries
- Ingest: Send data
- Dashboard: Manage dashboards
- Alert: Configure alerts
- Parser: Manage parsers
1.4 Save Token
- Click Create
- Copy the API token
- Store securely
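Before wiring the token into the gateway, you can sanity-check it with a minimal count() query against LogScale's repository query endpoint. The sketch below builds the request for the POST /api/v1/repositories/{repo}/query REST endpoint (verify the path against your LogScale version's API docs); the base URL and repository name are placeholders:

```javascript
// Build a minimal LogScale query request to verify a new API token.
// Endpoint path per LogScale's REST API; repo name is a placeholder.
function buildQueryRequest(baseUrl, repo, token) {
  return {
    url: `${baseUrl}/api/v1/repositories/${repo}/query`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ queryString: 'count()', start: '1m' })
    }
  };
}

const { url, options } = buildQueryRequest(
  'https://cloud.humio.com',
  'production-logs',
  process.env.LOGSCALE_TOKEN ?? 'your-api-token'
);
// fetch(url, options).then(r => console.log(r.status)); // expect 200 for a valid token
```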
Step 2-4: Standard Setup
Follow standard steps to access AI Gateway, find CrowdStrike Falcon LogScale API, and create MCP server.
Step 5: Configure API Endpoints
- Base URL:
https://cloud.humio.com
or your instance URL - Select endpoints:
- Query endpoints
- Repository endpoints
- Dashboard endpoints
- Alert endpoints
- Click Next
Step 6: MCP Server Configuration
- Name: "CrowdStrike Falcon LogScale"
- Description: "Log analytics and SIEM platform"
- Configure production mode
- Click Next
Step 7: Configure Authentication
- Authentication Type: Bearer Token
- Token Header:
Authorization
- Token Prefix:
Bearer
- Enter API Token
- Configure repository settings
Available LogScale API Permissions
Query Permissions
- Search
  - Execute queries
  - Access saved searches
  - View query history
  - Export results
- Streaming
  - Live tail queries
  - Real-time alerts
  - Continuous monitoring
  - Event streaming
Data Management
- Ingest
  - Send log data
  - Manage parsers
  - Configure pipelines
  - Control flow
- Repository Admin
  - Create repositories
  - Manage retention
  - Configure compression
  - Set quotas
Visualization
- Dashboard
  - Create dashboards
  - Manage widgets
  - Share views
  - Export reports
- Alert
  - Configure alerts
  - Manage notifications
  - Set thresholds
  - Track incidents
Recommended Permission Sets
For Analysts:
Search
Dashboard (read)
Alert (read)
For Engineers:
Search
Ingest
Parser
Dashboard
Alert
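If you script token provisioning, the permission sets above can live as data. A hypothetical helper (the set names and the `permission:read` scoping convention are illustrative, not a LogScale API):

```javascript
// Recommended permission sets from this guide, expressed as data.
const PERMISSION_SETS = {
  analyst: ['search', 'dashboard:read', 'alert:read'],
  engineer: ['search', 'ingest', 'parser', 'dashboard', 'alert']
};

// Does `role` cover `permission`? An unscoped grant (e.g. 'dashboard')
// also covers scoped requests (e.g. 'dashboard:read'); the reverse is false.
function hasPermission(role, permission) {
  const granted = PERMISSION_SETS[role] ?? [];
  return granted.includes(permission) ||
         granted.includes(permission.split(':')[0]);
}

console.log(hasPermission('analyst', 'search'));         // true
console.log(hasPermission('analyst', 'ingest'));         // false
console.log(hasPermission('engineer', 'dashboard:read')); // true
```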
Step 8-10: Complete Setup
Configure security, choose deployment, and deploy.
Using Your CrowdStrike Falcon LogScale MCP Server
With Claude Desktop
{
  "servers": {
    "crowdstrike-logscale": {
      "url": "your-mcp-server-url",
      "auth": {
        "type": "bearer",
        "token": "your-api-token"
      }
    }
  }
}
Natural Language Commands
- "Search for failed SSH login attempts in the last hour"
- "Show top 10 IP addresses by request count"
- "Create alert for error rate above 5%"
- "Build dashboard for application performance"
- "Analyze security events from WAF logs"
API Integration Example
// Initialize MCP client
const mcpClient = new MCPClient({
serverUrl: 'your-mcp-server-url',
auth: {
type: 'bearer',
token: 'your-api-token'
}
});
// Execute a search query
const searchResults = await mcpClient.crowdstrike.logscale.query({
repository: 'production-logs',
query: `
@type = "access_log"
| status >= 500
| groupBy(status)
| count()
| sort(_count, reverse=true)
`,
start: '1h',
end: 'now'
});
// Create a live query
const liveStream = await mcpClient.crowdstrike.logscale.liveQuery({
repository: 'security-logs',
query: `
@type = "auth_log"
| result = "failed"
| select(timestamp, user, source_ip, reason)
`
});
liveStream.on('event', (event) => {
console.log(`Failed login: ${event.user} from ${event.source_ip}`);
if (event.reason === 'brute_force_detected') {
// Trigger security response
handleBruteForce(event);
}
});
// Advanced analytics query
const analytics = await mcpClient.crowdstrike.logscale.query({
repository: 'application-logs',
query: `
@type = "app_log"
| response_time := parseFloat(response_time_ms)
| bucket(span=5m)
| stats(
avg = avg(response_time),
p50 = percentile(response_time, 50),
p95 = percentile(response_time, 95),
p99 = percentile(response_time, 99),
count = count()
)
| round(avg)
`,
start: '24h',
isLive: false
});
// Create parser
const parser = await mcpClient.crowdstrike.logscale.createParser({
repository: 'custom-logs',
parser: {
name: 'custom_app_parser',
tagFields: ['app_name', 'environment'],
tests: [
{
input: '2025-01-30 10:15:23 [ERROR] app=payment env=prod msg="Payment failed" user=123',
output: {
timestamp: '2025-01-30T10:15:23Z',
level: 'ERROR',
app_name: 'payment',
environment: 'prod',
message: 'Payment failed',
user_id: '123'
}
}
],
script: `
/^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
\s+\[(?<level>\w+)\]
\s+app=(?<app_name>\w+)
\s+env=(?<environment>\w+)
\s+msg="(?<message>[^"]+)"
\s+user=(?<user_id>\w+)$/
| parseTimestamp("yyyy-MM-dd HH:mm:ss", field=timestamp)
`
}
});
// Create alert
const alert = await mcpClient.crowdstrike.logscale.createAlert({
repository: 'production-logs',
alert: {
name: 'High Error Rate Alert',
description: 'Triggers when error rate exceeds 5%',
query: `
@type = "app_log"
| bucket(span=5m)
| level = "ERROR" | error_count := count()
| join({@type = "app_log" | total_count := count()}, on=_bucket)
| error_rate := (error_count / total_count) * 100
| error_rate > 5
`,
throttleField: '_bucket',
throttleTimeSeconds: 300,
actions: [
{
type: 'webhook',
url: 'https://alerts.company.com/webhook',
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
bodyTemplate: `{
"alert": "High Error Rate",
"rate": "{error_rate}%",
"time": "{_bucket}",
"repository": "{#repo}"
}`
}
]
}
});
// Create dashboard
const dashboard = await mcpClient.crowdstrike.logscale.createDashboard({
repository: 'production-logs',
dashboard: {
name: 'Application Performance Dashboard',
widgets: [
{
title: 'Request Rate',
type: 'timechart',
query: '@type = "access_log" | timechart(span=1m)',
options: {
yAxisScale: 'linear',
interpolation: 'step-after'
}
},
{
title: 'Error Distribution',
type: 'piechart',
query: '@type = "app_log" | level = "ERROR" | groupBy(error_code) | count()'
},
{
title: 'Response Time Percentiles',
type: 'timechart',
query: `
@type = "access_log"
| response_time := parseFloat(response_time_ms)
| timechart(
p50 = percentile(response_time, 50),
p95 = percentile(response_time, 95),
p99 = percentile(response_time, 99),
span=5m
)
`
},
{
title: 'Top Endpoints',
type: 'table',
query: `
@type = "access_log"
| groupBy(endpoint)
| stats(
requests = count(),
avg_time = avg(response_time_ms),
errors = count(status >= 500)
)
| sort(requests, reverse=true)
| head(10)
`
}
]
}
});
// Repository management
const repo = await mcpClient.crowdstrike.logscale.createRepository({
name: 'new-service-logs',
description: 'Logs for new microservice',
retention: {
timeBasedRetention: 30 * 24 * 60 * 60 * 1000, // 30 days in ms
ingestSizeBasedRetention: 1000 * 1024 * 1024 * 1024, // 1TB
storageSizeBasedRetention: 500 * 1024 * 1024 * 1024, // 500GB
compressedByteSize: true
}
});
// Complex correlation query
const correlation = await mcpClient.crowdstrike.logscale.query({
repository: 'security-logs',
query: `
// Find suspicious authentication patterns
@type = "auth_log"
| groupBy([user, source_ip], function=[
{failed := count(result="failed")},
{success := count(result="success")},
{countries := count(country, distinct=true)}
])
| ratio := failed / (success + 1)
| suspicious := ratio > 10 OR countries > 3
| suspicious = true
| join(
{@type = "access_log" | groupBy(user, function=count())},
field=user,
include=[access_count]
)
| sort(failed, reverse=true)
`,
start: '7d'
});
// Export query results
const exportJob = await mcpClient.crowdstrike.logscale.exportQuery({
repository: 'production-logs',
query: '@type = "audit_log" | @timestamp > 1d',
format: 'csv',
fileName: 'audit_logs_export.csv'
});
// Monitor export progress
const exportStatus = await mcpClient.crowdstrike.logscale.getExportStatus({
jobId: exportJob.id
});
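Export jobs are asynchronous, so getExportStatus generally needs to be polled until the job finishes. A small generic polling helper (the `done` flag on the status object is an assumption; check the actual job-status schema your server returns):

```javascript
// Poll an async status function until it reports completion,
// backing off by a fixed interval between attempts.
async function pollUntilDone(getStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus();
    if (status.done) return status;              // assumed completion flag
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('export did not finish in time');
}

// Usage with the MCP client from above (sketch):
// const finished = await pollUntilDone(
//   () => mcpClient.crowdstrike.logscale.getExportStatus({ jobId: exportJob.id })
// );
```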
Common Use Cases
Security Monitoring
- Threat detection
- Incident investigation
- Compliance auditing
- Forensic analysis
Application Performance
- Error tracking
- Latency analysis
- Usage patterns
- Capacity planning
Infrastructure Monitoring
- System metrics
- Service health
- Resource utilization
- Availability tracking
Business Analytics
- User behavior
- Transaction analysis
- Revenue tracking
- Operational metrics
Security Best Practices
- API Security:
  - Use secure token storage
  - Implement token rotation
  - Limit token permissions
  - Monitor API usage
- Query Safety:
  - Validate query inputs
  - Implement query limits
  - Monitor resource usage
  - Use query timeouts
- Data Protection:
  - Mask sensitive data
  - Implement field-level security
  - Audit data access
  - Encrypt data in transit
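Masking sensitive data is easiest to apply before events leave your pipeline. A minimal sketch (the field list is an example; extend it for your own data model):

```javascript
// Redact sensitive fields from a log event before shipping it.
// SENSITIVE_FIELDS is an example list, not exhaustive.
const SENSITIVE_FIELDS = ['password', 'token', 'ssn', 'credit_card'];

function maskEvent(event) {
  const masked = { ...event };
  for (const field of SENSITIVE_FIELDS) {
    if (field in masked) masked[field] = '***REDACTED***';
  }
  return masked;
}

console.log(maskEvent({ user: 'alice', password: 'hunter2', action: 'login' }));
// { user: 'alice', password: '***REDACTED***', action: 'login' }
```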
Troubleshooting
Common Issues
- Query Performance
  - Use efficient filters early
  - Limit time ranges
  - Optimize aggregations
  - Monitor query costs
- Ingestion Issues
  - Check parser configuration
  - Verify data format
  - Monitor ingestion rate
  - Review error logs
- Alert Problems
  - Test query independently
  - Verify webhook endpoints
  - Check throttle settings
  - Review action configuration
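When debugging query performance, a client-side timeout keeps a runaway search from hanging the caller while you narrow filters and time ranges. A generic wrapper sketch (the 30-second value is an arbitrary example):

```javascript
// Race a query promise against a timeout so slow searches fail fast
// instead of blocking the caller indefinitely.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`query timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (sketch):
// withTimeout(mcpClient.crowdstrike.logscale.query({ ... }), 30000);
```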
Getting Help
- Documentation: AI Gateway Docs
- Support: support@cequence.ai
- LogScale Docs: library.humio.com