
What to Log and How to Monitor in Cross-Platform Apps

Overview
Effective logging and monitoring serve as the nervous system of mobile applications, providing critical visibility into runtime behavior that development teams would otherwise lack. This visibility becomes even more crucial in cross-platform environments, where framework abstractions can obscure underlying platform behaviors and introduce unique failure modes. Despite its importance, logging is often implemented as an afterthought, resulting in either information overload with logs that consume resources but provide little actionable insight, or information deserts where critical events go unrecorded. This challenge is particularly acute in cross-platform frameworks like Flutter, React Native, and .NET MAUI, where the abstraction layers can complicate straightforward monitoring approaches.
The real-world implications of poor logging practices extend far beyond developer convenience. Without proper monitoring, teams struggle to reproduce user-reported issues, diagnose subtle performance problems, and prioritize optimization efforts effectively. The resulting "development in the dark" leads to longer resolution times, increased support costs, and ultimately frustrated users who experience problems the team can't efficiently diagnose or fix. This article explores practical logging and monitoring strategies for cross-platform mobile applications, focusing on approaches that balance framework-specific capabilities with universal logging principles. By implementing these techniques, development teams can build a monitoring infrastructure that provides actionable insights while avoiding the common pitfalls of resource-intensive or privacy-compromising logging implementations.
Logging Fundamentals
Before diving into framework-specific implementations, it's essential to establish core principles that apply across all platforms and frameworks. These fundamentals form the foundation for effective monitoring regardless of the technology stack.
What to Log (and What Not to Log)
The most valuable logs capture specific events that provide context for understanding application behavior:
User interactions: Log meaningful user actions that trigger state changes or server requests, capturing both the action type and relevant context, but without including sensitive input data.
State transitions: Record significant application state changes, particularly those that affect user experience or backend communication.
Network operations: Log the start, completion, and failure of network requests with appropriate metadata like endpoint, status codes, and timing information.
Resource usage thresholds: Capture events when the application approaches or exceeds resource limits that might impact performance or stability.
Initialization and configuration: Record application startup parameters, device information, and configuration details that affect operation.
Equally important is identifying what not to log. Avoid capturing:
Personally identifiable information (PII): Never log user credentials, payment details, personal contact information, or other sensitive data that could create security or compliance risks.
High-frequency events: Avoid logging routine operations that occur frequently without providing meaningful context, such as animation frames, routine UI updates, or standard lifecycle events.
Verbose internal state: Resist the temptation to dump entire data structures or detailed internal state unless absolutely necessary for debugging specific issues.
Information available through other channels: Don't duplicate telemetry that's already captured through dedicated crash reporting or analytics services.
Log Levels and Their Proper Use
Effective logging frameworks typically provide multiple severity levels, each with specific use cases:
Trace/Verbose: Extremely detailed information used only during focused debugging sessions. Generally disabled in production.
Debug: Detailed information useful for development and troubleshooting. Typically disabled in production unless investigating specific issues.
Info: General information highlighting application flow and significant successful operations. Safe for limited production use.
Warning: Potential issues that don't prevent functionality but may indicate problems or suboptimal conditions. Important in production.
Error: Failures of specific operations that don't crash the application but prevent features from working correctly. Critical in production.
Critical/Fatal: Severe failures that prevent core application functionality or lead to crashes. Essential in production.
Proper level assignment is crucial—logs that are too verbose become noise that obscures important signals, while logs that are too sparse miss critical context. A good rule of thumb is that production logs should primarily consist of Info, Warning, Error, and Critical entries, with Debug logs enabled selectively when investigating specific issues.
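To make the rule of thumb concrete, here is a minimal level-gated logger in TypeScript. It is a sketch rather than a specific library: the enum, class, and runtime-threshold hook are all assumed names.

// Log levels in ascending severity
enum LogLevel { Trace, Debug, Info, Warning, Error, Critical }

class LevelGatedLogger {
  // Default production threshold: Info and above
  private threshold: LogLevel = LogLevel.Info;

  // Raise or lower the threshold at runtime (e.g. from remote config)
  // to enable Debug logs only while investigating a specific issue
  setThreshold(level: LogLevel) {
    this.threshold = level;
  }

  log(level: LogLevel, message: string) {
    if (level < this.threshold) return; // Below threshold: drop silently
    console.log(`[${LogLevel[level].toUpperCase()}] ${message}`);
  }
}

const logger = new LevelGatedLogger();
logger.log(LogLevel.Debug, "cache miss for user profile"); // dropped in production
logger.log(LogLevel.Error, "payment gateway timeout");     // always recorded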
Structuring Logs for Analysis
The format of log entries significantly impacts their usefulness for analysis and troubleshooting:
- Consistent structure: Use a consistent format across all logs to facilitate automated parsing and analysis:
[Timestamp] [Level] [Component] [SessionID] Message {Context}
- Contextual metadata: Include relevant context with each log entry, such as session identifiers, screen/component names, and operation identifiers:
[2025-04-24T10:15:32Z] [ERROR] [PaymentProcessor] [sess_abc123] Failed to process payment {orderId: "ord_789", gatewayResponse: "declined", reason: "insufficient_funds"}
- Correlation identifiers: Use trace IDs or correlation tokens to link related events across different components and services (the sketch at the end of this section shows one way to propagate them):
[2025-04-24T10:15:30Z] [INFO] [ApiClient] [sess_abc123] [trace_xyz789] Sending payment request
[2025-04-24T10:15:32Z] [ERROR] [ApiClient] [sess_abc123] [trace_xyz789] Request failed with status 400
- JSON or structured format: For complex entries, use structured formats like JSON that preserve hierarchical relationships and are easily parsed:
{
  "timestamp": "2025-04-24T10:15:32Z",
  "level": "ERROR",
  "component": "PaymentProcessor",
  "sessionId": "sess_abc123",
  "message": "Failed to process payment",
  "context": {
    "orderId": "ord_789",
    "gatewayResponse": "declined",
    "reason": "insufficient_funds",
    "retryCount": 2
  }
}
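To tie these points together, here is a minimal TypeScript sketch of a logger that emits entries in the structure above while carrying session and trace identifiers. The field names mirror the examples, but the class and ID formats are otherwise assumptions.

// Emits one JSON log entry per call, carrying session and trace context
interface LogContext { [key: string]: unknown; }

class StructuredLogger {
  constructor(
    private component: string,
    private sessionId: string,  // stable for the app session
    private traceId?: string    // links related events across components
  ) {}

  // Derive a child logger that shares the session but gets its own trace ID
  withTrace(traceId: string): StructuredLogger {
    return new StructuredLogger(this.component, this.sessionId, traceId);
  }

  log(level: string, message: string, context: LogContext = {}) {
    const entry = {
      timestamp: new Date().toISOString(),
      level,
      component: this.component,
      sessionId: this.sessionId,
      ...(this.traceId ? { traceId: this.traceId } : {}),
      message,
      context,
    };
    console.log(JSON.stringify(entry)); // or hand off to a transport
  }
}

// Usage: both entries share trace_xyz789, so they can be correlated later
const api = new StructuredLogger("ApiClient", "sess_abc123").withTrace("trace_xyz789");
api.log("INFO", "Sending payment request");
api.log("ERROR", "Request failed with status 400");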
Privacy and Performance Considerations
Logging implementations must balance visibility with privacy and performance concerns:
Data minimization: Log only what's necessary for troubleshooting, avoiding excessive collection that creates privacy risks and consumes storage.
Redaction and anonymization: Implement automatic redaction of sensitive fields and identifiers before logs are transmitted or stored (a redaction sketch follows this list):
// Before redaction
[INFO] User profile updated {email: "user@example.com", preferences: {theme: "dark"}}
// After redaction
[INFO] User profile updated {email: "[REDACTED]", preferences: {theme: "dark"}}
User control: Provide mechanisms for users to opt out of detailed logging, particularly for diagnostics that might capture sensitive information.
Resource efficiency: Design logging implementations to minimize CPU, memory, and storage impact, especially on lower-end devices (a buffering sketch also follows this list):
- Buffer logs in memory and batch transmissions
- Compress logs before transmission
- Implement log rotation and size limits
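One way to implement redaction is a recursive, field-based pass applied before transmission. The following TypeScript sketch illustrates the idea; the field list is an assumption and would be extended per application:

// Fields whose values should never leave the device in logs
const SENSITIVE_FIELDS = new Set(["email", "password", "phone", "cardNumber", "token"]);

// Recursively replace sensitive values before a log entry is stored or sent
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, v] of Object.entries(value as Record<string, unknown>)) {
      out[key] = SENSITIVE_FIELDS.has(key) ? "[REDACTED]" : redact(v);
    }
    return out;
  }
  return value;
}

// redact({ email: "user@example.com", preferences: { theme: "dark" } })
// => { email: "[REDACTED]", preferences: { theme: "dark" } }

Buffering and batching can be handled by a small transport layer. Another sketch, where the endpoint, batch size, and flush interval are assumptions to be tuned per app:

// Buffers log lines in memory and ships them in batches to cut radio/CPU cost
class BufferedLogTransport {
  private buffer: string[] = [];

  constructor(
    private endpoint: string,  // assumed ingestion URL
    private maxBuffer = 50,    // flush when this many entries accumulate
    flushIntervalMs = 30_000   // ...or on a timer
  ) {
    setInterval(() => void this.flush(), flushIntervalMs);
  }

  write(entry: string) {
    this.buffer.push(entry);
    if (this.buffer.length >= this.maxBuffer) void this.flush();
  }

  private async flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0); // take ownership of the current buffer
    try {
      await fetch(this.endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ logs: batch }), // compress here if supported
      });
    } catch {
      // On failure, requeue but cap growth rather than buffer unboundedly
      this.buffer = batch.concat(this.buffer).slice(-this.maxBuffer * 2);
    }
  }
}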
Framework-Specific Logging Implementations
Flutter Logging Architecture
Flutter applications can implement robust logging through a combination of Dart's built-in capabilities and specialized packages:
Core Implementation
The logging package provides a flexible foundation for structured logging:
import 'package:logging/logging.dart';

void setupLogging() {
  // Configure hierarchical logging
  Logger.root.level = Level.INFO;
  Logger.root.onRecord.listen((record) {
    // Format and output logs
    print('${record.time}: ${record.level.name}: ${record.message}');
    // In production, forward to a remote logging service
    // (logCollectionService is a placeholder for your own transport)
    if (record.level >= Level.WARNING) {
      logCollectionService.send(record);
    }
  });
}

// Create loggers for different components
final apiLogger = Logger('api');
final authLogger = Logger('auth');

// Usage
Future<void> makeApiCall() async {
  apiLogger.info('Starting API call to /users');
  try {
    final response = await api.getUsers();
    apiLogger.info('API call successful: ${response.users.length} users');
  } catch (e, stackTrace) {
    // severe(message, [error, stackTrace]) attaches the error and trace
    apiLogger.severe('API call failed (url: /users)', e, stackTrace);
  }
}
For production applications, the fimber package offers Timber-style tree planting, which makes it straightforward to route logs to multiple destinations:
import 'package:fimber/fimber.dart';
import 'package:flutter/foundation.dart';

void setupLogging() {
  // In debug mode, log to console
  if (kDebugMode) {
    Fimber.plantTree(DebugTree());
  }
  // In all environments, plant a custom tree for remote logging
  // (FirebaseLogTree is an app-defined LogTree subclass; logLevels and
  // bufferSize are its own constructor parameters)
  Fimber.plantTree(FirebaseLogTree(
    logLevels: ["W", "E"], // Only warnings and errors
    bufferSize: 50,        // Batch uploads
  ));
}

// Usage with structured data folded into the message
void processPayment(Order order) {
  Fimber.i("Processing payment orderId=${order.id}");
  try {
    // Payment processing
  } catch (e) {
    Fimber.e("Payment failed orderId=${order.id}", ex: e);
  }
}
Remote Logging Services
For production Flutter applications, several services offer robust remote logging:
- Firebase Crashlytics with custom logging:
import 'package:firebase_crashlytics/firebase_crashlytics.dart';
import 'package:flutter/foundation.dart';

void logError(String message, dynamic error, StackTrace stackTrace) {
  // Log to local console in development
  if (kDebugMode) {
    print(message);
    print(error);
    print(stackTrace);
    return;
  }
  // Log to Crashlytics in production
  FirebaseCrashlytics.instance.recordError(
    error,
    stackTrace,
    reason: message,
    fatal: false,
  );
}
- Sentry for combined logging and crash reporting:
import 'package:sentry_flutter/sentry_flutter.dart';

Future<void> main() async {
  await SentryFlutter.init(
    (options) {
      options.dsn = 'YOUR_SENTRY_DSN';
      options.tracesSampleRate = 0.5;
    },
    appRunner: () => runApp(MyApp()),
  );
}

// Logging with breadcrumbs
Future<void> fetchData() async {
  Sentry.addBreadcrumb(
    Breadcrumb(
      category: 'api',
      message: 'Fetching user data',
      level: SentryLevel.info,
    ),
  );
  try {
    // API call
  } catch (e, stackTrace) {
    await Sentry.captureException(e, stackTrace: stackTrace);
  }
}
React Native Logging Solutions
React Native applications typically implement logging through a combination of JavaScript utilities and native module bridges:
Unified Logging Layer
Create a centralized logging service to handle different environments:
// logger.js
import Config from "react-native-config";
import { Platform } from "react-native";
// Configure remote logging service
import * as Sentry from "@sentry/react-native";

class Logger {
  constructor() {
    this.metadata = {
      platform: Platform.OS,
      appVersion: Config.APP_VERSION,
      deviceId: null, // Set after permission check
    };
    this.initializeRemoteLogging();
  }

  setDeviceId(deviceId) {
    this.metadata = { ...this.metadata, deviceId };
    Sentry.setUser({ id: deviceId });
  }

  initializeRemoteLogging() {
    if (!__DEV__) {
      Sentry.init({
        dsn: Config.SENTRY_DSN,
        environment: Config.ENVIRONMENT,
        tracesSampleRate: 0.2,
      });
    }
  }

  debug(message, data = {}) {
    if (__DEV__) {
      console.debug(`[DEBUG] ${message}`, data);
    }
    // Debug logs aren't sent to the remote service
  }

  info(message, data = {}) {
    const logData = { ...this.metadata, ...data };
    if (__DEV__) {
      console.info(`[INFO] ${message}`, logData);
    } else {
      Sentry.addBreadcrumb({
        category: "info",
        message,
        data: logData,
        level: "info",
      });
    }
  }

  warn(message, data = {}) {
    const logData = { ...this.metadata, ...data };
    if (__DEV__) {
      console.warn(`[WARN] ${message}`, logData);
    } else {
      Sentry.addBreadcrumb({
        category: "warning",
        message,
        data: logData,
        level: "warning",
      });
    }
  }

  error(message, error, data = {}) {
    const logData = { ...this.metadata, ...data };
    if (__DEV__) {
      console.error(`[ERROR] ${message}`, error, logData);
    } else {
      Sentry.captureException(error, {
        tags: logData,
        extra: { message },
      });
    }
  }
}

export default new Logger();
Network and Redux Logging
For React Native applications using Redux, middleware can provide valuable logging of state changes:
// reduxLogger.js
import logger from "./logger";

export const loggerMiddleware = (store) => (next) => (action) => {
  logger.info(`Dispatching action: ${action.type}`, {
    payload: action.payload,
  });
  const result = next(action);
  // Dumping full state is debug-only; it would be far too verbose in production
  logger.debug("Next state:", {
    state: store.getState(),
  });
  return result;
};
Network request logging can be implemented with Axios interceptors:
// apiClient.js
import axios from "axios";
import Config from "react-native-config";
import logger from "./logger";

// Simple request ID generator (placeholder; use a uuid library in a real app)
const generateUniqueId = () => Math.random().toString(36).slice(2);

const apiClient = axios.create({
  baseURL: Config.API_URL,
  timeout: 10000,
});

// Request interceptor
apiClient.interceptors.request.use(
  (config) => {
    const requestId = generateUniqueId();
    config.metadata = {
      requestId,
      startTime: new Date().getTime(),
    };
    logger.info(`API Request: ${config.method.toUpperCase()} ${config.url}`, {
      requestId,
      headers: config.headers,
      params: config.params,
      data: config.data,
    });
    return config;
  },
  (error) => {
    logger.error("API Request Error", error);
    return Promise.reject(error);
  }
);

// Response interceptor
apiClient.interceptors.response.use(
  (response) => {
    const duration = new Date().getTime() - response.config.metadata.startTime;
    logger.info(
      `API Response: ${response.config.method.toUpperCase()} ${response.config.url}`,
      {
        requestId: response.config.metadata.requestId,
        status: response.status,
        duration,
        dataSize: JSON.stringify(response.data).length,
      }
    );
    return response;
  },
  (error) => {
    const request = error.config;
    if (request && request.metadata) {
      const duration = new Date().getTime() - request.metadata.startTime;
      logger.error(
        `API Error: ${request.method.toUpperCase()} ${request.url}`,
        error,
        {
          requestId: request.metadata.requestId,
          status: error.response?.status,
          duration,
          response: error.response?.data,
        }
      );
    } else {
      logger.error("API Error", error);
    }
    return Promise.reject(error);
  }
);

export default apiClient;
.NET MAUI Logging Patterns
.NET MAUI applications benefit from the robust logging infrastructure available in the .NET ecosystem, particularly with the Microsoft.Extensions.Logging framework:
Structured Logging Setup
Set up centralized logging in the MAUI application:
// MauiProgram.cs
public static MauiApp CreateMauiApp()
{
    var builder = MauiApp.CreateBuilder();
    builder
        .UseMauiApp<App>()
        .ConfigureFonts(fonts =>
        {
            fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
            fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold");
        });

    // Add logging
    builder.Logging.AddDebug();

    // In release builds, add Application Insights or other providers
#if !DEBUG
    builder.Logging.AddApplicationInsights(
        configureTelemetryConfiguration: (config) =>
            config.ConnectionString = "YOUR_CONNECTION_STRING",
        configureApplicationInsightsLoggerOptions: (options) => { }
    );
#endif

    // Register services with DI container
    builder.Services.AddSingleton<IConnectivity>(Connectivity.Current);
    builder.Services.AddSingleton<IGeolocation>(Geolocation.Default);
    builder.Services.AddSingleton<IMap>(Map.Default);
    builder.Services.AddSingleton<IApiService, ApiService>();
    builder.Services.AddSingleton<IUserRepository, UserRepository>();

    return builder.Build();
}
Logging Within Services
Implement logging in service classes using dependency injection:
public class ApiService : IApiService
{
    private readonly HttpClient _httpClient;
    private readonly ILogger<ApiService> _logger;
    private readonly IConnectivity _connectivity;
    private readonly string _apiBaseUrl = "https://api.example.com/v1";

    public ApiService(ILogger<ApiService> logger, IConnectivity connectivity)
    {
        _logger = logger;
        _connectivity = connectivity;
        _httpClient = new HttpClient();
        _logger.LogInformation("ApiService initialized with base URL: {BaseUrl}", _apiBaseUrl);
    }

    public async Task<ApiResponse<T>> GetAsync<T>(string endpoint, CancellationToken cancellationToken = default)
    {
        var url = $"{_apiBaseUrl}/{endpoint}";
        var requestId = Guid.NewGuid().ToString();

        // The scope attaches RequestId and Endpoint to every log entry below
        using var logScope = _logger.BeginScope(new Dictionary<string, object>
        {
            ["RequestId"] = requestId,
            ["Endpoint"] = endpoint
        });

        _logger.LogInformation("Starting API request GET {Url}", url);
        try
        {
            var stopwatch = Stopwatch.StartNew();
            var response = await _httpClient.GetAsync(url, cancellationToken);
            stopwatch.Stop();

            _logger.LogInformation(
                "Completed API request GET {Url} with status {StatusCode} in {ElapsedMs}ms",
                url,
                (int)response.StatusCode,
                stopwatch.ElapsedMilliseconds);

            if (!response.IsSuccessStatusCode)
            {
                var errorContent = await response.Content.ReadAsStringAsync(cancellationToken);
                _logger.LogWarning(
                    "API request failed GET {Url} with status {StatusCode}. Response: {ErrorContent}",
                    url,
                    (int)response.StatusCode,
                    errorContent);
                return new ApiResponse<T>
                {
                    Success = false,
                    StatusCode = (int)response.StatusCode,
                    ErrorMessage = $"Request failed with status {response.StatusCode}"
                };
            }

            var content = await response.Content.ReadAsStringAsync(cancellationToken);
            var data = JsonSerializer.Deserialize<T>(content);
            return new ApiResponse<T>
            {
                Success = true,
                StatusCode = (int)response.StatusCode,
                Data = data
            };
        }
        catch (Exception ex) when (ex is HttpRequestException || ex is TaskCanceledException)
        {
            _logger.LogError(ex, "Network error during API request GET {Url}", url);
            return new ApiResponse<T>
            {
                Success = false,
                StatusCode = 0,
                ErrorMessage = "Network error: " + ex.Message
            };
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Unexpected error during API request GET {Url}", url);
            return new ApiResponse<T>
            {
                Success = false,
                StatusCode = 500,
                ErrorMessage = "Unexpected error: " + ex.Message
            };
        }
    }
}
Crash Reporting
Crash reporting provides critical visibility into fatal errors that end user sessions. Effective crash reporting requires both client-side implementation and server-side analysis capabilities.
Cross-Framework Crash Reporting Fundamentals
Regardless of the framework, effective crash reporting requires the following (a payload sketch follows the list):
Complete stack traces: Capturing the full execution path that led to the crash, including method names, line numbers, and file names.
Environment context: Recording device information, OS version, app version, and other environmental factors that might influence crash reproduction.
User journey: Tracking the sequence of screens or actions that preceded the crash to aid in reproduction.
State information: Capturing relevant application state without including sensitive data.
Deduplication and grouping: Aggregating similar crashes to identify high-impact issues and track crash rates across releases.
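Pulling these requirements together, a crash report payload has roughly the following shape. This is a TypeScript sketch of the structure only; real crash SDKs assemble and transmit this data for you, and every field name here is illustrative.

interface CrashReport {
  // Complete stack trace of the failure
  stackTrace: string;
  // Environment context for reproduction
  environment: {
    device: string;      // e.g. "Pixel 8"
    osVersion: string;   // e.g. "Android 14"
    appVersion: string;  // e.g. "2.3.1 (412)"
  };
  // User journey: the last N screens/actions before the crash
  breadcrumbs: Array<{ timestamp: string; category: string; message: string }>;
  // Non-sensitive application state
  state: Record<string, string>;
  // Fingerprint used server-side for deduplication and grouping,
  // e.g. a hash of the exception type plus the top stack frames
  fingerprint: string;
}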
Framework-Specific Crash Reporting
Flutter Crash Reporting
Flutter applications can implement crash reporting with Firebase Crashlytics or Sentry:
import 'package:firebase_core/firebase_core.dart';
import 'package:firebase_crashlytics/firebase_crashlytics.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp();

  // Pass all uncaught Flutter framework errors to Crashlytics
  FlutterError.onError = FirebaseCrashlytics.instance.recordFlutterFatalError;

  // Handle uncaught asynchronous errors that occur outside the framework
  PlatformDispatcher.instance.onError = (error, stack) {
    FirebaseCrashlytics.instance.recordError(error, stack, fatal: true);
    return true;
  };

  // Set user identifier once authenticated
  FirebaseCrashlytics.instance.setUserIdentifier('user123');

  // Add custom keys for context
  FirebaseCrashlytics.instance.setCustomKey('last_feature_used', 'payment_flow');

  runApp(MyApp());
}
React Native Crash Reporting
React Native applications can use Crashlytics through the @react-native-firebase/crashlytics package or Sentry:
import React from "react";
import crashlytics from "@react-native-firebase/crashlytics";
import { Alert, Button, StyleSheet, Text, View } from "react-native";
import RNRestart from "react-native-restart";

// Set up error boundaries at the application level
class ErrorBoundary extends React.Component {
  state = { hasError: false };

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    // Log to Crashlytics, keeping the component stack as context
    crashlytics().log(errorInfo.componentStack);
    crashlytics().recordError(error);
  }

  render() {
    if (this.state.hasError) {
      return (
        <View style={styles.errorContainer}>
          <Text style={styles.errorText}>Something went wrong</Text>
          <Button title="Restart App" onPress={() => RNRestart.Restart()} />
        </View>
      );
    }
    return this.props.children;
  }
}

const styles = StyleSheet.create({
  errorContainer: { flex: 1, alignItems: "center", justifyContent: "center" },
  errorText: { marginBottom: 12 },
});

// Set up global error handler
const setupErrorHandling = () => {
  // Handle JS errors
  const originalErrorHandler = ErrorUtils.getGlobalHandler();
  ErrorUtils.setGlobalHandler((error, isFatal) => {
    // Log to Crashlytics
    crashlytics().recordError(error);
    // Alert the user if fatal
    if (isFatal) {
      Alert.alert(
        "Unexpected Error",
        "The application encountered an unexpected error. Please restart the application.",
        [
          {
            text: "Restart",
            onPress: () => RNRestart.Restart(),
          },
        ]
      );
    }
    // Call original handler
    originalErrorHandler(error, isFatal);
  });
};
.NET MAUI Crash Reporting
.NET MAUI applications can use AppCenter or Application Insights for crash reporting:
// MauiProgram.cs
using Microsoft.AppCenter;
using Microsoft.AppCenter.Analytics;
using Microsoft.AppCenter.Crashes;

public static MauiApp CreateMauiApp()
{
    var builder = MauiApp.CreateBuilder();

    // Configure crash reporting
    AppCenter.Start(
        "ios=YOUR_IOS_APP_SECRET;android=YOUR_ANDROID_APP_SECRET",
        typeof(Analytics),
        typeof(Crashes)
    );

    // Other configuration
    // ...

    return builder.Build();
}

// In App.xaml.cs
protected override void OnStart()
{
    // Set a non-PII user identity after authentication
    AppCenter.SetUserId("user_12345");

    // Track app launch
    Analytics.TrackEvent("AppStarted");

    // Handle unhandled exceptions
    // (_navigationService and _authService are assumed injected app services)
    AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
    {
        if (args.ExceptionObject is Exception exception)
        {
            Crashes.TrackError(exception, new Dictionary<string, string>
            {
                { "CurrentScreen", _navigationService.CurrentPage },
                { "UserLoggedIn", _authService.IsAuthenticated.ToString() }
            });
        }
    };
}
Performance Monitoring
Beyond crash reporting, monitoring application performance provides crucial insights into user experience and potential issues before they become critical failures.
Key Performance Metrics
Regardless of framework, several key metrics should be tracked:
App Start Time: Measure cold start and warm start durations to identify initialization bottlenecks (see the sketch after this list).
Screen Load Time: Track how long each screen takes to become interactive after navigation.
Frame Rate: Monitor UI thread performance, particularly during animations and scrolling.
Network Performance: Measure API call latency, success rates, and payload sizes.
Memory Usage: Track memory consumption patterns to identify potential leaks or excessive usage.
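Most SDKs capture these metrics automatically, but app start time is easy to approximate by hand. A minimal TypeScript sketch, assuming a metrics client of your choice:

// Capture process start as early as possible, e.g. at the top of the entry module
const processStart = Date.now();

// Call this once the first screen is interactive; the callback is an
// assumed hook into whatever metrics backend the app uses
function reportColdStart(reportMetric: (name: string, ms: number) => void) {
  reportMetric("cold_start_ms", Date.now() - processStart);
}

// Usage with a hypothetical metrics client:
// reportColdStart((name, ms) => metricsClient.track(name, ms));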
Framework-Specific Performance Monitoring
Flutter Performance Tracking
Flutter applications can leverage Firebase Performance Monitoring or custom Dart implementations:
import 'package:firebase_performance/firebase_performance.dart';
import 'package:http/http.dart';

class PerformanceMonitoringHttpClient extends BaseClient {
  final Client _inner;
  final FirebasePerformance _performance;

  PerformanceMonitoringHttpClient(this._inner)
      : _performance = FirebasePerformance.instance;

  @override
  Future<StreamedResponse> send(BaseRequest request) async {
    // Create a trace for the HTTP request (match the method case-insensitively)
    final metric = _performance.newHttpMetric(
      request.url.toString(),
      HttpMethod.values.firstWhere(
        (method) => method.name.toLowerCase() == request.method.toLowerCase(),
        orElse: () => HttpMethod.Get,
      ),
    );
    // Start the trace
    await metric.start();
    try {
      final response = await _inner.send(request);
      // Set response information
      metric.responsePayloadSize = response.contentLength ?? 0;
      metric.responseContentType = response.headers['content-type'];
      metric.httpResponseCode = response.statusCode;
      return response;
    } finally {
      // Stop the trace
      await metric.stop();
    }
  }
}

// Custom screen timing
class PerformanceTracker {
  final Map<String, Trace> _activeTraces = {};

  Future<void> startScreenTrace(String screenName) async {
    if (_activeTraces.containsKey(screenName)) {
      await _activeTraces[screenName]!.stop();
    }
    final trace = FirebasePerformance.instance.newTrace('screen_$screenName');
    await trace.start();
    _activeTraces[screenName] = trace;
  }

  Future<void> stopScreenTrace(String screenName) async {
    final trace = _activeTraces[screenName];
    if (trace != null) {
      await trace.stop();
      _activeTraces.remove(screenName);
    }
  }

  void recordMetric(String screenName, String metricName, int value) {
    final trace = _activeTraces[screenName];
    if (trace != null) {
      trace.incrementMetric(metricName, value);
    }
  }
}
React Native Performance Monitoring
React Native applications can use Flipper during development and Firebase Performance Monitoring in production:
import perf from "@react-native-firebase/perf";

// Network monitoring with Firebase Performance
const monitorApiCalls = async () => {
  // Monitor a specific URL
  const httpMetric = perf().newHttpMetric(
    "https://api.example.com/users",
    "GET"
  );
  // Start the metric
  await httpMetric.start();
  try {
    // Perform the API call
    const response = await fetch("https://api.example.com/users");
    const json = await response.json();
    // Set values
    httpMetric.setHttpResponseCode(response.status);
    httpMetric.setResponseContentType(response.headers.get("Content-Type"));
    httpMetric.setResponsePayloadSize(JSON.stringify(json).length);
    return json;
  } finally {
    // Stop and record the metric
    await httpMetric.stop();
  }
};

// Screen performance tracking
class PerformanceTracker {
  constructor() {
    this.traces = {};
  }

  async startScreenTrace(screenName) {
    // Stop existing trace if it exists
    if (this.traces[screenName]) {
      await this.stopScreenTrace(screenName);
    }
    // Create and start new trace
    const trace = perf().newTrace(`screen_${screenName}`);
    await trace.start();
    this.traces[screenName] = trace;
  }

  async stopScreenTrace(screenName) {
    const trace = this.traces[screenName];
    if (trace) {
      await trace.stop();
      delete this.traces[screenName];
    }
  }

  // React Navigation integration
  setupNavigationTracking(navigation) {
    // Track screen transitions
    navigation.addListener("state", async (e) => {
      const currentRouteName = this.getActiveRouteName(e.data.state);
      if (currentRouteName) {
        await this.startScreenTrace(currentRouteName);
      }
    });
  }

  // Helper to extract current screen from navigation state
  getActiveRouteName(state) {
    if (!state || !state.routes) return null;
    const route = state.routes[state.index];
    // Dive into nested navigators
    if (route.state) {
      return this.getActiveRouteName(route.state);
    }
    return route.name;
  }
}
.NET MAUI Performance Monitoring
.NET MAUI applications can leverage Application Insights for comprehensive performance monitoring:
public class PerformanceService : IPerformanceService
{
    private readonly TelemetryClient _telemetryClient;
    private readonly Dictionary<string, IOperationHolder<RequestTelemetry>> _activeOperations = new();

    public PerformanceService(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public void StartPageTracking(string pageName)
    {
        // Stop existing operation if it exists
        StopPageTracking(pageName);
        // Start new operation
        var operation = _telemetryClient.StartOperation<RequestTelemetry>($"Page_{pageName}");
        operation.Telemetry.Context.Properties["Type"] = "PageView";
        _activeOperations[pageName] = operation;
    }

    public void StopPageTracking(string pageName)
    {
        if (_activeOperations.TryGetValue(pageName, out var operation))
        {
            _telemetryClient.StopOperation(operation);
            _activeOperations.Remove(pageName);
        }
    }

    public void TrackMetric(string name, double value, Dictionary<string, string> properties = null)
    {
        _telemetryClient.TrackMetric(name, value, properties);
    }

    public IDisposable TrackOperation(string operationName)
    {
        var operation = _telemetryClient.StartOperation<DependencyTelemetry>(operationName);
        return new OperationTracker(() => _telemetryClient.StopOperation(operation));
    }

    private class OperationTracker : IDisposable
    {
        private readonly Action _onDispose;
        private bool _disposed;

        public OperationTracker(Action onDispose)
        {
            _onDispose = onDispose;
        }

        public void Dispose()
        {
            if (!_disposed)
            {
                _onDispose?.Invoke();
                _disposed = true;
            }
        }
    }
}

// Usage in view model
public class ProductsViewModel : BaseViewModel
{
    private readonly IPerformanceService _performanceService;
    private readonly IProductsService _productsService;

    public ProductsViewModel(
        IPerformanceService performanceService,
        IProductsService productsService)
    {
        _performanceService = performanceService;
        _productsService = productsService;
    }

    public async Task LoadProductsAsync()
    {
        IsBusy = true;
        using (_performanceService.TrackOperation("LoadProducts"))
        {
            try
            {
                var stopwatch = Stopwatch.StartNew();
                var products = await _productsService.GetProductsAsync();
                stopwatch.Stop();
                Products = new ObservableCollection<ProductViewModel>(
                    products.Select(p => new ProductViewModel(p))
                );
                _performanceService.TrackMetric(
                    "ProductLoadTime",
                    stopwatch.ElapsedMilliseconds,
                    new Dictionary<string, string>
                    {
                        ["ProductCount"] = Products.Count.ToString()
                    }
                );
            }
            catch (Exception ex)
            {
                // Handle and log error
            }
            finally
            {
                IsBusy = false;
            }
        }
    }
}
Practical Monitoring Strategy
Creating an effective monitoring strategy requires balancing visibility with resource constraints and privacy considerations. Here's a practical approach to implementation:
Start with the essentials: Begin by implementing crash reporting and basic error logging before adding more detailed performance metrics.
Define clear objectives: Identify specific questions you need to answer through monitoring, such as "Why are users experiencing crashes at checkout?" or "Which API calls are creating performance bottlenecks?"
Implement progressive disclosure: Configure logging levels to increase detail only when investigating specific issues, rather than capturing everything all the time.
Automate analysis: Set up alerting for critical issues and regular reporting of key metrics to identify trends before they become problems.
Respect user privacy: Implement data minimization and anonymization techniques, particularly for logs that might contain sensitive information.
Monitor the monitors: Track the performance impact of your logging and monitoring implementations to ensure they don't degrade user experience.
Summary
Effective logging and monitoring form the foundation of reliable, high-performance mobile applications. By implementing appropriate logging levels, structured formats, and targeted performance metrics, development teams gain the visibility needed to identify and resolve issues proactively rather than reactively. The specific implementation details vary across Flutter, React Native, and .NET MAUI, but the core principles remain consistent: capture meaningful events without overwhelming resources, protect user privacy while maintaining diagnostics capabilities, and focus monitoring efforts on metrics that drive actionable insights.
As mobile applications continue to grow in complexity, particularly in cross-platform environments, robust monitoring becomes increasingly crucial for maintaining quality and user satisfaction. The approaches outlined in this article provide a starting point for implementing effective logging and monitoring in cross-platform mobile applications. By adapting these patterns to your specific project requirements and continuously refining your monitoring strategy based on real-world insights, you can build a feedback loop that supports ongoing improvement in application quality, performance, and reliability. Remember that the most valuable monitoring isn't the most comprehensive—it's the monitoring that helps you answer specific questions about application behavior and user experience, enabling data-driven decisions about where to focus optimization efforts for maximum impact.