Google Cloud Logging MCP Server

Create a powerful Model Context Protocol (MCP) server for Google Cloud Logging in minutes with our AI Gateway. This guide walks you through setting up seamless log management with advanced querying and instant API authentication.

About Google Cloud Logging API

Google Cloud Logging is a fully managed service that allows you to store, search, analyze, monitor, and alert on logging data and events. The API provides comprehensive access to log entries, sinks, metrics, and exclusions across your Google Cloud resources.

Key Capabilities

  • Log Collection: Automatic collection from GCP services
  • Custom Logs: Application and system log ingestion
  • Real-time Analysis: Stream processing and alerting
  • Log Router: Route logs to various destinations
  • Log-based Metrics: Create metrics from log data
  • Advanced Queries: Powerful filtering and searching
  • Export Options: BigQuery, Cloud Storage, Pub/Sub
  • Retention Management: Configurable retention periods

API Features

  • RESTful API: Standard Google Cloud API
  • gRPC Support: High-performance streaming
  • OAuth 2.0: Secure authentication
  • Structured Logging: JSON log entries
  • Batch Operations: Efficient bulk writes
  • Real-time Streaming: Tail logs in real-time
  • Query Language: Advanced log filtering
  • Audit Logging: Complete audit trail

What You Can Do with Cloud Logging MCP Server

The MCP server turns the Cloud Logging API into a natural language interface, enabling AI agents to:

Log Management

  • Write Logs

    • "Write application log entry"
    • "Log error with stack trace"
    • "Create structured log entry"
    • "Batch write multiple logs"
  • Read Logs

    • "Show logs from last hour"
    • "Get error logs for service"
    • "Filter logs by severity"
    • "Search logs for pattern"
  • Log Organization

    • "List available logs"
    • "Delete old log entries"
    • "Get log metadata"
    • "Manage log buckets"

Log Queries

  • Basic Queries

    • "Find all error logs"
    • "Show logs from specific service"
    • "Get logs between timestamps"
    • "Filter by resource type"
  • Advanced Queries

    • "Find logs with response time > 1s"
    • "Get logs matching regex pattern"
    • "Aggregate logs by error type"
    • "Cross-resource log correlation"
  • Saved Queries

    • "Save frequent search"
    • "Create query library"
    • "Share queries with team"
    • "Schedule query execution"

Log-based Metrics

  • Create Metrics

    • "Create error rate metric"
    • "Track custom events"
    • "Monitor latency from logs"
    • "Count specific patterns"
  • Metric Management

    • "List log metrics"
    • "Update metric filter"
    • "Delete unused metrics"
    • "Get metric data"
  • Alerting

    • "Alert on error spike"
    • "Set threshold alerts"
    • "Configure notifications"
    • "Create alert policies"

Log Sinks

  • Export Configuration

    • "Export logs to BigQuery"
    • "Stream to Cloud Storage"
    • "Send to Pub/Sub topic"
    • "Route to external system"
  • Sink Management

    • "Create log sink"
    • "Update sink destination"
    • "Filter exported logs"
    • "Monitor sink health"
  • Exclusions

    • "Exclude debug logs"
    • "Filter noisy entries"
    • "Reduce log volume"
    • "Manage costs"

Security & Audit

  • Audit Logs

    • "View admin activity"
    • "Track data access"
    • "Monitor system events"
    • "Export audit trail"
  • Access Control

    • "Grant log access"
    • "Set view permissions"
    • "Manage service accounts"
    • "Configure CMEK"
  • Compliance

    • "Enable access transparency"
    • "Configure retention"
    • "Meet regulatory requirements"
    • "Data residency control"

Monitoring & Analysis

  • Log Analytics

    • "Analyze error patterns"
    • "Track performance trends"
    • "Identify anomalies"
    • "Generate insights"
  • Dashboards

    • "Create log dashboard"
    • "Visualize log data"
    • "Real-time monitoring"
    • "Share with team"
  • Integration

    • "Connect to monitoring"
    • "Link with traces"
    • "Correlate with metrics"
    • "Unified observability"

Prerequisites

  • Access to Cequence AI Gateway
  • Google Cloud Project
  • Cloud Logging API enabled
  • Service account with permissions

Step 1: Configure Cloud Logging API Access

1.1 Enable Cloud Logging API

  1. Go to Google Cloud Console
  2. Navigate to APIs & Services > Library
  3. Search for "Cloud Logging API"
  4. Click Enable (or script it, as sketched below)
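
If you prefer to script this step, the API can also be enabled through the Service Usage API. A minimal sketch using the official googleapis Node.js client (the project ID is a placeholder, and application-default credentials are assumed):

const {google} = require('googleapis');

async function enableLoggingApi(projectId) {
  // Application-default credentials; run `gcloud auth application-default login` first
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  const serviceusage = google.serviceusage({version: 'v1', auth});
  // services.enable returns a long-running operation
  const op = await serviceusage.services.enable({
    name: `projects/${projectId}/services/logging.googleapis.com`
  });
  console.log('Enable requested:', op.data);
}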

1.2 Create Service Account

  1. Go to IAM & Admin > Service Accounts
  2. Click Create Service Account
  3. Configure:
    • Name: "AI Gateway Logging"
    • Role: Logging Admin (roles/logging.admin) or a custom role with the permissions listed in 1.3
  4. Create and download JSON key

1.3 Configure Permissions

Required permissions (a verification sketch follows the list):

  • logging.logEntries.create - Write logs
  • logging.logEntries.list - Read logs
  • logging.logs.list - List logs
  • logging.sinks.create - Create sinks
  • logging.metrics.create - Create metrics
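
To confirm the service account actually holds these permissions, call testIamPermissions on the project; it echoes back only the permissions the caller has. A sketch using the googleapis Node.js client (the key path and project ID are placeholders):

const {google} = require('googleapis');

async function checkPermissions(projectId) {
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account-key.json', // path to the downloaded JSON key
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  const crm = google.cloudresourcemanager({version: 'v1', auth});
  const res = await crm.projects.testIamPermissions({
    resource: projectId,
    requestBody: {
      permissions: [
        'logging.logEntries.create',
        'logging.logEntries.list',
        'logging.logs.list',
        'logging.sinks.create',
        'logging.metrics.create'
      ]
    }
  });
  // Only the permissions the caller actually holds come back
  console.log('Granted:', res.data.permissions || []);
}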

1.4 Set Up Log Buckets (Optional)

  1. Navigate to Logging > Logs Storage
  2. Create custom buckets if needed (see the sketch after this list)
  3. Configure retention policies
  4. Set up CMEK if required
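
Bucket creation can also be scripted; the underlying operation is projects.locations.buckets.create. A sketch with the googleapis Node.js client (the bucket ID and retention period are placeholder values):

const {google} = require('googleapis');

async function createLogBucket(projectId) {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/logging.admin']
  });
  const logging = google.logging({version: 'v2', auth});
  const bucket = await logging.projects.locations.buckets.create({
    parent: `projects/${projectId}/locations/global`,
    bucketId: 'app-logs',               // placeholder bucket ID
    requestBody: {retentionDays: 90}    // retention policy in days
  });
  console.log('Created bucket:', bucket.data.name);
}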

Steps 2-4: Standard Setup

Follow the standard steps to access the AI Gateway, locate the Cloud Logging API, and create your MCP server.

Step 5: Configure API Endpoints

  1. Base URL: https://logging.googleapis.com/v2
  2. Project ID: Your GCP project
  3. Resource Names: Configure defaults (accepted formats shown below)
  4. Click Next
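
The v2 API addresses everything by resource name, so it helps to know the accepted formats when configuring defaults. Shown here as JavaScript template strings; PROJECT_ID and LOG_ID are placeholders:

// Parents that can own logs, sinks, metrics, and exclusions:
//   projects/PROJECT_ID, organizations/ORGANIZATION_ID,
//   folders/FOLDER_ID, billingAccounts/BILLING_ACCOUNT_ID
const parent = `projects/${PROJECT_ID}`;

// Individual logs live under a parent. LOG_ID must be URL-encoded,
// e.g. "compute.googleapis.com%2Factivity":
const logName = `projects/${PROJECT_ID}/logs/${LOG_ID}`;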

Step 6: MCP Server Configuration

  1. Name: "Cloud Logging"
  2. Description: "Log management and analysis"
  3. Request Timeout: 60 seconds
  4. Click Next

Step 7: Configure Authentication

  1. Authentication Type: Service Account
  2. Upload service account JSON key
  3. Or configure OAuth 2.0:
    • Client ID and Secret
    • Required scopes
  4. Test connection (a smoke-test sketch follows)
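
A quick way to verify authentication is to issue a minimal read through the server and confirm entries come back. This smoke test reuses the hypothetical MCPClient from the integration example later in this guide:

// Smoke test: list one log entry to prove auth and connectivity work
const client = new MCPClient({
  serverUrl: 'your-mcp-server-url',
  auth: {type: 'service-account', credentials: serviceAccountKey}
});

const result = await client.logging.listLogEntries({
  resourceNames: [`projects/${projectId}`],
  pageSize: 1
});
console.log(result.entries.length > 0 ? 'Connection OK' : 'Connected, but no entries matched');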

Available Cloud Logging API Operations

Entries APIs

  • Log Entries
    • Write entries
    • List entries
    • Tail entries

Logs APIs

  • Log Management
    • List logs
    • Delete logs
    • Get log info

Sinks APIs

  • Log Sinks
    • Create sink
    • Get sink
    • Update sink
    • Delete sink
    • List sinks

Metrics APIs

  • Log Metrics
    • Create metric
    • Get metric
    • Update metric
    • Delete metric
    • List metrics

Exclusions APIs

  • Log Exclusions
    • Create exclusion
    • Get exclusion
    • Update exclusion
    • Delete exclusion

Views APIs

  • Log Views
    • Create view
    • Update view
    • List views
    • Get view

Steps 8-10: Complete Setup

Configure security settings, choose deployment options, and deploy your server.

Using Your Cloud Logging MCP Server

With Claude Desktop
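
Add the server to your Claude Desktop configuration (all values below are placeholders):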

{
  "servers": {
    "cloud-logging": {
      "url": "your-mcp-server-url",
      "auth": {
        "type": "service-account",
        "credentials": "base64-encoded-service-account-key"
      },
      "config": {
        "project_id": "your-project-id",
        "default_log": "projects/your-project-id/logs/application"
      }
    }
  }
}

Natural Language Commands

  • "Show me all error logs from the API service in the last hour"
  • "Create a sink to export security logs to BigQuery"
  • "Set up an alert for high error rates"
  • "Find logs containing 'database connection failed'"
  • "Create a metric to track API response times from logs"

API Integration Example

// Initialize MCP client
const mcpClient = new MCPClient({
  serverUrl: 'your-mcp-server-url',
  auth: {
    type: 'service-account',
    credentials: serviceAccountKey
  }
});

// Write log entries
const logEntry = await mcpClient.logging.writeLogEntries({
  logName: `projects/${projectId}/logs/application`,
  resource: {
    type: 'gce_instance',
    labels: {
      instance_id: '1234567890',
      zone: 'us-central1-a'
    }
  },
  entries: [
    {
      severity: 'ERROR',
      jsonPayload: {
        message: 'Database connection failed',
        error: {
          code: 'CONNECTION_TIMEOUT',
          details: 'Unable to connect to database after 30s',
          stack: 'Error: Connection timeout\n at connect (/app/db.js:42:11)'
        },
        context: {
          user_id: 'user-123',
          request_id: 'req-456',
          service: 'api-backend',
          version: '1.2.3'
        }
      },
      timestamp: new Date().toISOString(),
      labels: {
        env: 'production',
        component: 'database'
      },
      trace: `projects/${projectId}/traces/abc123`,
      spanId: 'def456',
      traceSampled: true
    }
  ],
  partialSuccess: true // accept valid entries even if some in the batch fail
});

// Query logs with an advanced filter (Logging query language)
const logs = await mcpClient.logging.listLogEntries({
  resourceNames: [`projects/${projectId}`],
  filter: `
    resource.type="k8s_container"
    AND resource.labels.namespace_name="production"
    AND severity >= ERROR
    AND timestamp >= "2025-01-30T00:00:00Z"
    AND jsonPayload.response_time_ms > 1000
  `,
  orderBy: 'timestamp desc',
  pageSize: 100
});

console.log(`Found ${logs.entries.length} slow error responses`);

// Create a log-based counter metric
const metric = await mcpClient.logging.createLogMetric({
  parent: `projects/${projectId}`,
  metric: {
    name: 'error_rate',
    description: 'Rate of error log entries',
    filter: 'severity >= ERROR',
    metricDescriptor: {
      metricKind: 'DELTA', // log-based counters are always DELTA / INT64
      valueType: 'INT64',
      unit: '1',
      labels: [
        {
          key: 'service',
          valueType: 'STRING',
          description: 'The service that generated the error'
        },
        {
          key: 'error_code',
          valueType: 'STRING',
          description: 'The error code'
        }
      ]
    },
    labelExtractors: {
      service: 'EXTRACT(jsonPayload.context.service)',
      error_code: 'EXTRACT(jsonPayload.error.code)'
    }
    // Counter metrics need no valueExtractor; that field is only for
    // distribution metrics that read a numeric value out of each entry.
  }
});

console.log(`Created metric: ${metric.name}`);

// Create log sink to BigQuery
const sink = await mcpClient.logging.createSink({
  parent: `projects/${projectId}`,
  sink: {
    name: 'bigquery_analytics_sink',
    destination: `bigquery.googleapis.com/projects/${projectId}/datasets/logs_analytics`,
    filter: `
      resource.type="gae_app"
      OR resource.type="k8s_container"
      OR resource.type="cloud_function"
    `,
    description: 'Export application logs to BigQuery for analysis',
    exclusions: [
      {
        name: 'exclude_debug_logs',
        description: 'Exclude verbose debug logs',
        filter: 'severity < INFO',
        disabled: false
      }
    ],
    includeChildren: true,
    bigqueryOptions: {
      usePartitionedTables: true
      // usesTimestampColumnPartitioning is output-only, so it is not set here
    }
  },
  uniqueWriterIdentity: true
});

console.log(`Created sink: ${sink.name}`);
console.log(`Grant this identity access to BigQuery: ${sink.writerIdentity}`);

// Create log exclusion to reduce costs
const exclusion = await mcpClient.logging.createExclusion({
  parent: `projects/${projectId}`,
  exclusion: {
    name: 'exclude_health_checks',
    description: 'Exclude health check logs to reduce volume',
    filter: `
      jsonPayload.path="/health"
      OR jsonPayload.path="/ready"
      OR jsonPayload.user_agent=~".*HealthChecker.*"
    `,
    disabled: false
  }
});

// Tail logs in real time
const stream = await mcpClient.logging.tailLogEntries({
  resourceNames: [`projects/${projectId}`],
  filter: 'severity >= WARNING',
  bufferWindow: '2s'
});

stream.on('data', (response) => {
  response.entries.forEach(entry => {
    console.log(`[${entry.severity}] ${entry.timestamp}: ${JSON.stringify(entry.jsonPayload)}`);
  });
});

// Analyze logs for patterns.
// Note: this is a convenience method of the MCP client; the Logging REST API
// itself has no pattern-analysis operation (Log Analytics covers this via SQL).
const analysis = await mcpClient.logging.analyzeLogPatterns({
  resourceNames: [`projects/${projectId}`],
  filter: 'severity = ERROR AND timestamp >= "2025-01-01T00:00:00Z"',
  analysisType: 'ERROR_PATTERNS',
  timeRange: {
    startTime: '2025-01-01T00:00:00Z',
    endTime: '2025-01-31T23:59:59Z'
  }
});

console.log('\nTop Error Patterns:');
analysis.patterns.forEach(pattern => {
  console.log(`Pattern: ${pattern.pattern}`);
  console.log(`Count: ${pattern.count}`);
  console.log(`Example: ${pattern.exampleEntry.jsonPayload.message}\n`);
});

// Create an alert policy for the log-based metric.
// Note: alert policies belong to the Cloud Monitoring API; the MCP client
// proxies the call here for convenience.
const alertPolicy = await mcpClient.logging.createAlertPolicy({
  name: 'High Error Rate Alert',
  displayName: 'High Error Rate',
  conditions: [{
    displayName: 'Error rate exceeds threshold',
    conditionThreshold: {
      filter: `
        metric.type="logging.googleapis.com/user/error_rate"
        AND resource.type="k8s_container"
      `,
      comparison: 'COMPARISON_GT',
      thresholdValue: 10,
      duration: '60s',
      aggregations: [{
        alignmentPeriod: '60s',
        perSeriesAligner: 'ALIGN_RATE',
        crossSeriesReducer: 'REDUCE_SUM',
        groupByFields: ['resource.label.namespace_name']
      }]
    }
  }],
  notificationChannels: ['projects/myproject/notificationChannels/123'],
  alertStrategy: {
    autoClose: '1800s' // 30 minutes
  }
});

// Structured logging helper (a convenience wrapper around entries.write)
const structuredLog = await mcpClient.logging.log({
  severity: 'INFO',
  message: 'User action completed',
  httpRequest: {
    requestMethod: 'POST',
    requestUrl: '/api/users/123/update',
    requestSize: '1024',
    status: 200,
    responseSize: '256',
    userAgent: 'Mozilla/5.0...',
    remoteIp: '192.168.1.1',
    latency: '0.125s'
  },
  operation: {
    id: 'operation-123',
    producer: 'user-service',
    first: false,
    last: true
  },
  labels: {
    user_id: 'user-123',
    action: 'profile_update'
  }
});

// Export audit logs for compliance.
// Note: convenience wrapper; the underlying Logging operation for one-off
// exports to Cloud Storage is entries.copy (CopyLogEntries).
const exportJob = await mcpClient.logging.exportLogs({
  parent: `projects/${projectId}`,
  filter: `
    protoPayload.@type="type.googleapis.com/google.cloud.audit.AuditLog"
    AND timestamp >= "2025-01-01T00:00:00Z"
    AND timestamp < "2025-02-01T00:00:00Z"
  `,
  destination: `gs://${projectId}-audit-logs/2025/01/`,
  outputFormat: 'JSON'
});

// Create a log view for a specific team
const logView = await mcpClient.logging.createView({
  parent: `projects/${projectId}/locations/global/buckets/_Default`,
  viewId: 'frontend_team_view',
  view: {
    // A LogView has no display name; the viewId above becomes part of
    // its resource name.
    description: 'Logs from frontend services only',
    filter: `
      resource.type="k8s_container"
      AND resource.labels.namespace_name="frontend"
    `
  }
});

// Correlate logs with traces
const correlatedLogs = await mcpClient.logging.listLogEntries({
  resourceNames: [`projects/${projectId}`],
  filter: `trace="projects/${projectId}/traces/abc123"`,
  orderBy: 'timestamp asc'
});

console.log(`Found ${correlatedLogs.entries.length} logs for trace abc123`);

// Batch write for efficiency
const batchEntries = [];
for (let i = 0; i < 1000; i++) {
  batchEntries.push({
    severity: 'INFO',
    jsonPayload: {
      message: `Batch entry ${i}`,
      batch_id: 'batch-001',
      index: i
    },
    timestamp: new Date().toISOString()
  });
}

await mcpClient.logging.writeLogEntries({
  logName: `projects/${projectId}/logs/batch-processing`,
  entries: batchEntries,
  partialSuccess: true
});

// Monitor sink health.
// Note: convenience method; the underlying data comes from the
// logging.googleapis.com/exports/* metrics in Cloud Monitoring.
const sinkMetrics = await mcpClient.logging.getSinkMetrics({
  sinkName: `projects/${projectId}/sinks/bigquery_analytics_sink`,
  timeRange: {
    startTime: new Date(Date.now() - 3600000).toISOString(), // one hour ago
    endTime: new Date().toISOString()
  }
});

console.log(`Sink exported ${sinkMetrics.exportedBytes} bytes`);
console.log(`Errors: ${sinkMetrics.errors}`);

Common Use Cases

Application Logging

  • Error tracking
  • Performance monitoring
  • User activity logs
  • Debugging issues

Security Monitoring

  • Audit trail analysis
  • Access monitoring
  • Threat detection
  • Compliance logging

Operations

  • System monitoring
  • Resource tracking
  • Cost analysis
  • Capacity planning

Analytics

  • User behavior analysis
  • Business metrics
  • Performance trends
  • Custom dashboards

Best Practices

  1. Log Structure:

    • Use structured JSON logs
    • Include correlation IDs
    • Add meaningful labels
    • Follow severity guidelines (see the sketch after this list)
  2. Cost Management:

    • Set retention policies
    • Use exclusions wisely
    • Monitor log volume
    • Archive old logs
  3. Performance:

    • Batch write operations
    • Use appropriate severity
    • Filter at source
    • Optimize queries
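
On the severity point: Cloud Logging defines a fixed ladder of severities (DEFAULT, DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL, ALERT, EMERGENCY). A small mapping helper keeps application log levels consistent; the level names on the left are an assumption about your application's logger:

// Map common application log levels onto Cloud Logging's LogSeverity enum.
// The keys are typical app-logger levels (an assumption about your stack);
// the values are official Cloud Logging severities.
const SEVERITY_MAP = {
  trace: 'DEBUG',
  debug: 'DEBUG',
  info: 'INFO',
  warn: 'WARNING',
  error: 'ERROR',
  fatal: 'CRITICAL'
};

function toCloudSeverity(level) {
  return SEVERITY_MAP[level.toLowerCase()] || 'DEFAULT';
}

// Usage with the hypothetical client from the integration example:
// entries: [{severity: toCloudSeverity('warn'), jsonPayload: {...}}]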

Troubleshooting

Common Issues

  1. Permission Errors

    • Verify service account roles
    • Check resource permissions
    • Review organization policies
    • Confirm API enablement
  2. Query Performance (example filter after this list)

    • Use time ranges
    • Add resource filters
    • Optimize filter syntax
    • Use appropriate indexes
  3. Export Issues

    • Check sink permissions
    • Verify destination access
    • Review filter syntax
    • Monitor export metrics
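
To make the query-performance advice concrete: bound the time range, pin the resource type, and keep indexed fields (resource.type, logName, severity, timestamp) ahead of free-text terms. A sketch with placeholder values:

// A time-bounded, resource-scoped filter in the Logging query language,
// assembled in JavaScript. The ":" operator is a substring match and is
// the most expensive clause, so it comes last.
const filter = [
  'resource.type="k8s_container"',
  'severity>=ERROR',
  'timestamp>="2025-01-30T00:00:00Z"',
  'timestamp<"2025-01-31T00:00:00Z"',
  'textPayload:"connection refused"'
].join(' AND ');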

Getting Help