Forward Client-Side Logs to Splunk

Client-side applications, such as React SPAs, generate valuable logs of their own: UI errors, unexpected flows, browser-specific issues, performance signals, and metrics about user interactions. However, centralizing these logs in enterprise monitoring systems like Splunk presents unique challenges:
Security constraints: Client-side applications typically cannot send logs directly to Splunk due to security policies, credential exposure risks, and CORS restrictions.
Format compatibility: When our backend uses Serilog with Serilog.Sinks.Splunk, maintaining a consistent log format across frontend and backend becomes critical for unified querying and alerting.
Network efficiency: Batching, retry logic, and error handling require careful implementation to avoid losing logs or impacting application performance.
This article presents a practical solution: a TypeScript logging library that mirrors Serilog's format, combined with a .NET proxy that forwards logs to Splunk. We'll examine the implementation from the ui-logger-to-proxy repository, explaining both the architectural decisions and the technical details.
The Architecture
The solution consists of three components:
Client-side logger (TypeScript): Captures logs in Serilog-compatible format.
.NET proxy API: Receives logs from clients and forwards them to Splunk.
Splunk: The final destination for centralized log storage and analysis.
Understanding the Serilog Log Format
Before implementing the client-side logger, we need to understand the target format. Serilog.Sinks.Splunk produces JSON logs with this structure:
{
"Level": "Information",
"RenderedMessage": "User logged in successfully",
"MessageTemplate": "User {UserName} logged in successfully",
"Properties": {
"UserName": "johndoe@gmail.com",
"SessionId": "abc123"
},
"ReleaseVersion": "10.0.0",
"Timestamp": "2024-11-14T10:30:00.000Z"
}
Key fields explained:
Level: Log severity (Verbose, Debug, Information, Warning, Error, Fatal).
MessageTemplate: The template string with placeholders (e.g., {UserName}).
RenderedMessage: The final message with placeholders replaced.
Properties: Structured data extracted from the message template or added as context.
Other custom fields: Additional values, such as ReleaseVersion, added through enrichment.
The distinction between MessageTemplate and RenderedMessage is crucial. Splunk can index and query based on the template pattern, enabling queries like "show all login failures" regardless of the specific username.
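To make that distinction concrete, here is a small sketch (the `render` helper below is hypothetical, shown only for illustration, not the library's actual API) of two events that share one template:

```typescript
// Hypothetical render helper, shown only to illustrate the
// template/rendered split.
function render(template: string, props: Record<string, unknown>): string {
  let message = template;
  for (const [key, value] of Object.entries(props)) {
    // split/join replaces every occurrence of the placeholder
    message = message.split(`{${key}}`).join(String(value));
  }
  return message;
}

const template = "Login failed for {UserName}";
const first = render(template, { UserName: "alice" });
const second = render(template, { UserName: "bob" });
// Both events carry the same MessageTemplate, so a single
// template-based Splunk query matches every login failure
// regardless of the specific user.
```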
Client-Side Implementation
The TypeScript BatchLogger class provides the foundation for structured logging with Serilog compatibility:
type LogLevel =
| "Verbose"
| "Debug"
| "Information"
| "Warning"
| "Error"
| "Fatal";
type LogEntry = {
time: number;
host: string;
source: string;
sourcetype: string;
index: string;
event: {
Level: LogLevel;
RenderedMessage: string;
MessageTemplate: string;
Properties: Record<string, any>;
Exception?: string;
[key: string]: any;
};
};
This structure mirrors Serilog's internal format, ensuring seamless integration with existing Splunk configurations.
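As a concrete illustration (all values below are hypothetical), the outer fields form the Splunk HTTP Event Collector envelope, while the nested event object carries the Serilog-style payload:

```typescript
// Hypothetical entry: outer fields are the Splunk HEC envelope,
// the nested event object is the Serilog-compatible payload.
const example = {
  time: Date.now(),
  host: "web-client",
  source: "my-app",
  sourcetype: "ui",
  index: "my-index",
  event: {
    Level: "Information",
    RenderedMessage: "User johndoe@gmail.com logged in successfully",
    MessageTemplate: "User {UserName} logged in successfully",
    Properties: { UserName: "johndoe@gmail.com" },
  },
};
```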
Configuration
The BatchLogger constructor accepts a comprehensive configuration object that controls both behavior and performance characteristics:
type LoggerConfig = {
source: string;
sourcetype: string;
index: string;
host: string;
endpoint: string;
batchSize: number;
flushInterval: number;
maxRetries: number;
minimumLogLevel?: LogLevel;
enrichment?: Record<string, any>;
};
Configuration Parameters
Splunk Metadata:
source: Identifies the application generating logs (e.g., "my-app").
sourcetype: Categorizes the log format (e.g., "ui", "json").
index: Specifies the Splunk index for storage (e.g., "my-index").
host: Logical hostname for the logs (e.g., "web-client").
Network Configuration:
endpoint: URL of the .NET proxy server (e.g., "http://localhost:5244/collector").
Performance Tuning:
batchSize: Number of log entries per batch (default: 10).
flushInterval: Maximum time in milliseconds to hold logs before sending (default: 5000).
maxRetries: Number of retry attempts for failed requests (default: 3).
Log Level Filtering:
minimumLogLevel: Minimum log level to process (default: "Information"). Only logs at or above this level are processed and sent.
Hierarchy: Verbose(0) < Debug(1) < Information(2) < Warning(3) < Error(4) < Fatal(5).
Global Enrichment:
enrichment: Key-value pairs added to all log entries (e.g., version, environment).
Considerations
Batch Size:
Small batches (5-10): Better for real-time monitoring, higher network overhead.
Large batches (20-50): More efficient network usage, potential memory pressure.
Very large batches (100+): Risk of losing many logs on failures.
Flush Interval:
Short intervals (1-3 seconds): Near real-time delivery, more network requests.
Medium intervals (5-10 seconds): Balanced performance and timeliness.
Long intervals (30+ seconds): Risk of log loss on page navigation.
Retry Strategy:
Few retries (1-2): Fast failure detection, potential log loss.
Moderate retries (3-5): Good balance for temporary network issues.
Many retries (10+): Risk of blocking the logging queue.
Log Level Strategy:
Verbose/Debug: Development and troubleshooting scenarios only.
Information: General application flow and user actions (production default).
Warning: Potentially problematic situations that don't break functionality.
Error/Fatal: Production environments focusing on actionable issues.
Example
export const logger = new BatchLogger({
source: "my-app",
sourcetype: "ui",
index: "my-index",
host: "127.0.0.1",
endpoint: "http://localhost:5244/collector",
batchSize: 10,
flushInterval: 5000,
maxRetries: 3,
minimumLogLevel: "Information",
enrichment: {
ReleaseVersion: "10.0.0",
Environment: "Development"
},
});
Template Rendering Engine
The logger implements a simple but effective template rendering system:
private renderMessage(template: string, properties: Record<string, any>): string {
  let message = template;
  for (const [key, value] of Object.entries(properties)) {
    const placeholder = `{${key}}`;
    // split/join replaces every occurrence of the placeholder;
    // String.replace with a string argument would only replace the first match
    message = message.split(placeholder).join(String(value));
  }
  return message;
}
This approach maintains compatibility with Serilog's message template format, allowing developers to write familiar logging statements:
logger.information("Processed {Count} items in {Duration}ms", {
Count: 150,
Duration: 2340,
});
Log Level Filtering
The logger implements efficient log level filtering to reduce noise and improve performance:
const LogLevelValues: Record<LogLevel, number> = {
Verbose: 0,
Debug: 1,
Information: 2,
Warning: 3,
Error: 4,
Fatal: 5,
};
private isEnabled(level: LogLevel): boolean {
const minimumLogLevel = this.config.minimumLogLevel || 'Information';
return LogLevelValues[level] >= LogLevelValues[minimumLogLevel];
}
private log(level: LogLevel, messageTemplate: string, properties?: Record<string, any>, error?: Error): void {
// Early exit if log level is below minimum threshold
if (!this.isEnabled(level)) return;
// ... continue with log processing
}
Key Benefits:
Performance optimization: Prevents unnecessary object creation and processing for filtered logs.
Centralized filtering: Single point of control for all log level decisions.
Early exit: Returns immediately without any overhead for filtered logs.
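A standalone sketch of the check (a free function here rather than the class method, purely for illustration):

```typescript
// Numeric hierarchy mirrors the one used by the logger.
const Levels: Record<string, number> = {
  Verbose: 0, Debug: 1, Information: 2, Warning: 3, Error: 4, Fatal: 5,
};

// Returns true when a log at `level` should be processed.
function isEnabled(level: string, minimum = "Information"): boolean {
  return Levels[level] >= Levels[minimum];
}
```

With the default minimum of "Information", Debug calls are dropped before any object creation, while Error calls pass through.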
Batching Strategy
Performance optimization is achieved through intelligent batching:
private log(
level: LogLevel,
messageTemplate: string,
properties?: Record<string, any>,
error?: Error
): void {
if (!this.isEnabled(level)) return;
const props = properties || {};
const renderedMessage = this.renderMessage(messageTemplate, props);
const mergedProperties = {
...this.contextProperties,
...props,
};
const logEntry: LogEntry = {
time: Date.now(),
host: this.config.host,
source: this.config.source,
sourcetype: this.config.sourcetype,
index: this.config.index,
event: {
Level: level,
RenderedMessage: renderedMessage,
MessageTemplate: messageTemplate,
Properties: mergedProperties,
...this.config.enrichment,
},
};
if (error) {
logEntry.event.Exception = error.stack;
}
this.logQueue.push(logEntry);
if (this.logQueue.length >= this.config.batchSize) {
this.flush();
} else if (!this.flushTimer) {
this.flushTimer = window.setTimeout(
() => this.flush(),
this.config.flushInterval
);
}
}
The batching mechanism:
Size-based flushing: Triggers when batch size is reached.
Time-based flushing: Ensures logs aren't held indefinitely.
Event-based flushing: Flushes on page unload and visibility changes.
Reliability Features
The logger includes robust error handling and retry logic:
public async flush(): Promise<void> {
if (this.logQueue.length === 0 || this.isFlushing) return;
this.isFlushing = true;
const logsToSend = [...this.logQueue];
this.logQueue = [];
if (this.flushTimer) {
clearTimeout(this.flushTimer);
this.flushTimer = null;
}
let retries = 0;
let success = false;
while (retries < this.config.maxRetries && !success) {
try {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 10000);
const response = await fetch(this.config.endpoint, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(logsToSend),
signal: controller.signal,
});
clearTimeout(timeoutId);
if (response.ok) {
success = true;
} else {
const shouldRetry = this.shouldRetryError(response.status);
if (!shouldRetry) {
console.error(
`Non-retryable error ${response.status}: ${response.statusText}`
);
break;
}
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
} catch (error: any) {
retries++;
if (error.name === 'AbortError') {
console.error(
`Request timeout (attempt ${retries}/${this.config.maxRetries})`
);
} else {
console.error(
`Failed to send logs (attempt ${retries}/${this.config.maxRetries}):`,
error
);
}
if (retries < this.config.maxRetries) {
const delay = Math.pow(2, retries) * 1000;
const jitter = Math.random() * 1000;
await new Promise(resolve => setTimeout(resolve, delay + jitter));
}
}
}
if (!success) {
console.warn('Failed to send logs after all retries. Logs:', logsToSend);
}
this.isFlushing = false;
}
Key reliability features:
Exponential backoff: Prevents overwhelming the server during failures.
Jitter: Reduces thundering herd problems.
Request timeouts: Prevents hanging requests.
Selective retry: Avoids retrying non-recoverable errors.
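The `shouldRetryError` helper called in `flush` isn't shown above; a minimal sketch, assuming 5xx responses plus a couple of transient 4xx statuses (408 request timeout, 429 rate limiting) are worth retrying, could look like this:

```typescript
// Sketch of a selective-retry predicate: server errors and transient
// client errors are retryable; other 4xx responses are not.
function shouldRetryError(status: number): boolean {
  if (status >= 500) return true; // server-side errors are usually transient
  return status === 408 || status === 429; // request timeout / rate limited
}
```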
Page Lifecycle Management
Critical for single-page applications, the logger handles browser lifecycle events:
constructor(config: LoggerConfig) {
const defaults = {
batchSize: 10,
flushInterval: 5000,
maxRetries: 3,
};
this.config = {
...defaults,
...config,
};
this.handleBeforeUnload = () => {
this.flushSync();
};
this.handleVisibilityChange = () => {
this.onVisibilityChange();
};
window.addEventListener('beforeunload', this.handleBeforeUnload);
document.addEventListener('visibilitychange', this.handleVisibilityChange);
}
private onVisibilityChange(): void {
if (document.hidden) {
this.flush();
}
}
public flushSync(): void {
if (this.logQueue.length === 0) return;
const logsToSend = [...this.logQueue];
this.logQueue = [];
if (this.flushTimer) {
clearTimeout(this.flushTimer);
this.flushTimer = null;
}
const blob = new Blob([JSON.stringify(logsToSend)], {
type: 'application/json',
});
navigator.sendBeacon(this.config.endpoint, blob);
}
The sendBeacon API ensures log delivery even when users navigate away from the page, providing better log coverage for user journeys.
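One caveat: `sendBeacon` may be unavailable or may refuse payloads above its size limit (commonly around 64 KB). A hedged sketch of a fallback, with the beacon function injected only to keep the example self-contained:

```typescript
type BeaconFn = (url: string, data: Blob) => boolean;

// Prefer sendBeacon; fall back to fetch with keepalive, which lets the
// request outlive the page in modern browsers. In real code the beacon
// argument would be navigator.sendBeacon.bind(navigator).
function queueFinalBatch(
  endpoint: string,
  payload: string,
  beacon?: BeaconFn
): "beacon" | "fetch" {
  const blob = new Blob([payload], { type: "application/json" });
  if (beacon && beacon(endpoint, blob)) {
    return "beacon"; // the browser queued it; delivery survives navigation
  }
  void fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: payload,
    keepalive: true,
  }).catch(() => {
    // nothing more we can do during unload
  });
  return "fetch";
}
```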
Server-Side Implementation
Proxy Endpoint
The .NET server provides a minimal proxy with CORS support:
app.MapPost("/collector", (LogEntry[] logEntries) =>
{
if (logEntries != null && logEntries.Length > 0)
{
foreach (var logEntry in logEntries)
{
var individualJson = JsonSerializer.Serialize(logEntry, jsonOptions);
Log.Logger.ForwardToSplunk(individualJson);
}
}
return Results.Ok(new { timestamp = DateTime.UtcNow });
});
This approach:
Accepts batched logs: Reduces HTTP overhead.
Maintains structure: Preserves client-generated log format.
Custom Splunk Formatter
The key innovation is the RawJsonFormatter class:
using Serilog.Events;
using Serilog.Formatting;
public class RawJsonFormatter : ITextFormatter
{
public void Format(Serilog.Events.LogEvent logEvent, TextWriter output)
{
if (logEvent.Properties.TryGetValue("RawJson", out var rawJsonProperty) &&
rawJsonProperty is ScalarValue scalarValue &&
scalarValue.Value is string rawJson)
{
output.Write(rawJson);
}
else
{
throw new NotSupportedException("RawJsonFormatter only supports log events with RawJson property");
}
}
}
This formatter bypasses Serilog's standard JSON serialization, allowing client-generated JSON to pass through unchanged to Splunk. This preserves the exact structure and field names required for compatibility.
Serilog Extension
The extension method simplifies the forwarding process:
public static class SplunkJsonLoggerExtensions
{
public static void ForwardToSplunk(this ILogger logger, string rawJson)
{
logger.Information("Raw JSON data received from client {@RawJson}", rawJson);
}
}
By using Serilog's structured logging with the @ operator, the raw JSON becomes a property that the custom formatter can extract. The @ ensures that:
No double-encoding: The JSON string is not serialized again.
Preserved structure: The raw JSON maintains its exact format.
Direct access: The formatter can extract the string property directly.
Configuration
The Serilog configuration demonstrates the complete pipeline:
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Information()
.Enrich.FromLogContext()
.WriteTo.Console()
.WriteTo.EventCollector(
"<SPLUNK_HOST>", // The Splunk host configured with an Event Collector
"<EVENT_COLLECTOR_TOKEN>", // The token used to authenticate to the Splunk Event Collector
new RawJsonFormatter(), // The text formatter used to render log events as JSON
"services/collector/event", // The Splunk Event Collector URI
LogEventLevel.Information, // The minimum log event level required to write an event to the sink
2, // The interval in seconds at which the queue is inspected for batching
100, // The size of the batch
)
.CreateLogger();
Running Locally
This section provides a step-by-step guide to running the logging solution in our local development environment.
Prerequisites
Ensure we have the following tools installed:
Node.js 24+: Required for the React client application.
.NET 9 SDK or later: Required for the ASP.NET Core proxy server.
Docker and Docker Compose: Required for running Splunk locally.
Git: For cloning the repository.
Clone the Repository
git clone https://github.com/raulnq/ui-logger-to-proxy.git
cd ui-logger-to-proxy
Start Splunk Container
Start the Splunk container using Docker Compose:
docker-compose up -d splunk
Wait for Splunk to initialize (this typically takes 3-5 minutes). We can monitor the startup progress:
docker-compose logs -f splunk
Look for the message indicating Splunk has started successfully.
Configure Splunk Index
Access Splunk Web UI: Navigate to http://localhost:8000
Login credentials:
Username: admin
Password: splunk123456
Create Index:
Go to Settings > Indexes
Click New Index
Index Name: my-index
Click Save
Start the .NET Proxy Server
Open a new terminal window and start the server:
cd server
dotnet restore
dotnet run
The server will start on http://localhost:5244. We should see output similar to:
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5244
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
Start the React Client
Open another terminal window and start the client:
cd client
npm install
npm run dev
The client will start on http://localhost:5173. We should see:
VITE v4.x.x ready in xxx ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
Test the Complete Pipeline
Open the React app: Navigate to http://localhost:5173
Generate test logs:
Enter a message in the text area
Click "Send Log Message"
Verify server reception: Check the .NET server console for log entries
Verify Splunk indexing:
Go to Splunk Web UI (http://localhost:8000)
Navigate to Search & Reporting
Search query: index="my-index"
We should see the log entries from the client.
Thanks and happy coding.




