
Best Practices: React Logging and Error Handling

2025-12-19 00:43:02

React Logging and Error Handling Best Practices

In this guide, we'll explore battle-tested patterns for implementing production-ready logging and error handling in React applications using the logzai-js library, though the same patterns apply with any logging library.

Whether you're building a small side project or maintaining a large-scale application, these practices will help you catch bugs faster, understand user behavior better, and ship more reliable software.

Understanding the Foundation

Why Logging Matters in React

Client-side logging presents unique challenges compared to server-side logging:

  • Distributed Environments: Your app runs in thousands of different browsers, devices, and network conditions
  • Limited Visibility: Without proper instrumentation, you only see errors that get reported—most issues go unnoticed
  • User Context: Understanding what the user was doing when an error occurred is critical for debugging
  • Performance Impact: Excessive logging can slow down your app and frustrate users

The key is finding the right balance: log enough to debug effectively, but not so much that you impact performance or create noise.

The Cost of Poor Error Handling

When error handling is an afterthought, the consequences ripple through your entire organization:

  • Lost Revenue: Users who encounter errors often abandon their workflows—shopping carts left behind, forms never submitted
  • Support Burden: Your support team spends hours trying to reproduce issues with incomplete information
  • Development Time: Engineers waste days tracking down bugs that could have been caught in minutes with proper logging
  • User Trust: Frequent errors erode confidence in your product, making users less likely to return

What Makes Good Logging

Good logging isn't about quantity—it's about quality and structure:

  • Structured Data: Logs should be machine-readable with consistent fields (JSON, not strings); see the example after this list
  • Appropriate Levels: Use debug, info, warn, and error levels correctly
  • Rich Context: Include user IDs, session IDs, routes, and relevant state
  • Actionable Information: Every log should help you understand what happened and why
  • Performance-Aware: Logging should never block your UI thread
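
For example, here's a minimal sketch of the structured-data principle (using a generic logger placeholder, since we haven't introduced a specific library yet):

// ❌ Unstructured: the details are trapped inside a string
logger.info(`Checkout failed for user ${userId} on order ${orderId}`)

// ✅ Structured: every field is searchable and aggregatable
logger.info('Checkout failed', {
  userId,
  orderId,
  reason: 'payment_declined',
})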

With these principles in mind, let's dive into implementation.

Setting Up LogzAI in Your React Application

Getting started with logzai-js is straightforward. First, install the package:

npm install logzai-js
# or
pnpm add logzai-js
# or
yarn add logzai-js

Next, initialize logzai in your application entry point (typically main.tsx or index.tsx):

import logzai from 'logzai-js'
import { browserPlugin } from 'logzai-js/browser'

// Initialize logzai before rendering your app
logzai.init({
  ingestToken: 'your-ingest-token',
  ingestEndpoint: 'https://ingest.logzai.com',
  serviceName: 'my-react-app',
  environment: process.env.NODE_ENV, // 'development' or 'production'
  mirrorToConsole: process.env.NODE_ENV === 'development', // See logs in console during dev
})

// Enable the browser plugin for automatic error handling
logzai.plugin('browser', browserPlugin)

What the Browser Plugin Does Automatically

The browser plugin is a powerful addition that transforms your error handling with zero additional code. Once enabled, it automatically:

  • Captures all JavaScript errors: Hooks into window.onerror to catch unhandled exceptions
  • Captures unhandled promise rejections: Monitors window.onunhandledrejection to catch async errors that slip through
  • Logs errors with full context: Automatically includes stack traces, error messages, and browser information
  • Non-blocking operation: Sends logs asynchronously without impacting your app's performance

This means even errors you didn't explicitly catch will be logged to logzai, giving you complete visibility into production issues.

Customizing the Browser Plugin

The browser plugin accepts configuration options to enhance your logs with application-specific context:

import logzai from 'logzai-js'
import { browserPlugin } from 'logzai-js/browser'
import { store } from './store'
import { selectCurrentUser, selectSelectedOrg } from './store/selectors'

// Initialize logzai
logzai.init({
  ingestToken: 'your-ingest-token',
  ingestEndpoint: 'https://ingest.logzai.com',
  serviceName: 'my-react-app',
  environment: process.env.NODE_ENV,
  mirrorToConsole: process.env.NODE_ENV === 'development',
})

// Configure browser plugin with custom options
logzai.plugin('browser', browserPlugin, {
  // Custom message formatter for errors
  messageFormatter: (error: any) => {
    return `Exception: ${error.message}`
  },

  // Context injector - automatically adds context to every log and error
  contextInjector: () => {
    const state = store.getState()
    const currentUser = selectCurrentUser(state)
    const selectedOrg = selectSelectedOrg(state)

    return {
      userId: currentUser?.id,
      userEmail: currentUser?.email,
      orgId: selectedOrg?.id,
      orgName: selectedOrg?.name,
      currentRoute: window.location.pathname,
      userAgent: navigator.userAgent,
      viewport: `${window.innerWidth}x${window.innerHeight}`,
    }
  },

  // Optional: Filter out errors you don't want to log
  errorFilter: (error: Error) => {
    // Return false to skip logging this error
    if (error.message.includes('ResizeObserver')) {
      return false // Don't log benign ResizeObserver errors
    }
    return true // Log all other errors
  },
})

Key Options:

  • messageFormatter: Transform how error messages appear in logs
  • contextInjector: Inject application state (user, org, route) into every log automatically
  • errorFilter: Skip logging specific errors that aren't actionable

With this setup, every error—even those you didn't anticipate—will be automatically logged with rich context about the user, their session, and the state of your application.

That's it! You're now ready to start logging with comprehensive automatic error tracking.
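
As a quick sanity check (the fields here are purely illustrative), you can emit a first log right after initialization and confirm it shows up in your logging dashboard:

// Somewhere after logzai.init() has run
logzai.info('App initialized', {
  appVersion: '1.0.0', // illustrative value
  environment: process.env.NODE_ENV,
})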

Logging Best Practices

1. Use Appropriate Log Levels

Understanding when to use each log level is crucial for creating a signal-to-noise ratio that actually helps you debug:

DEBUG: Use for detailed information useful during development. These logs are typically verbose and not needed in production.

// Good use of debug
const handleSearch = (query: string) => {
  logzai.debug('Search initiated', {
    query,
    timestamp: Date.now(),
  })
}

INFO: Use for important business events and normal operations you want to track.

// Good use of info
const handleCheckout = async (cartItems: CartItem[]) => {
  logzai.info('Checkout started', {
    itemCount: cartItems.length,
    totalValue: calculateTotal(cartItems),
    userId: currentUser.id,
  })

  // ... checkout logic
}

WARN: Use for recoverable issues or deprecated features that should be addressed but don't break functionality.

// Good use of warn
const fetchUserData = async () => {
  const cachedData = getFromCache('userData')

  if (!cachedData) {
    logzai.warn('Cache miss for user data', {
      userId: currentUser.id,
      cacheExpiry: getCacheExpiry(),
    })
    return fetchFromAPI()
  }

  return cachedData
}

ERROR: Use for unrecoverable errors that impact functionality.

// Good use of error
const saveUserSettings = async (settings: Settings) => {
  try {
    await api.updateSettings(settings)
  } catch (error) {
    logzai.error('Failed to save user settings', {
      error: error.message,
      userId: currentUser.id,
      settings,
    })
    throw error // Re-throw to let UI handle it
  }
}

2. Structure Your Logs with Context

Context transforms logs from cryptic messages into actionable insights. Compare these two approaches:

// ❌ Bad: Minimal context
logzai.info('User logged in')

// ✅ Good: Rich context
logzai.info('User logged in', {
  userId: user.id,
  email: user.email,
  loginMethod: 'oauth',
  provider: 'google',
  timestamp: Date.now(),
  previousLoginAt: user.lastLoginAt,
  daysSinceLastLogin: calculateDaysSince(user.lastLoginAt),
})

The second example gives you everything you need to understand user behavior patterns, detect anomalies, and debug issues.

Key context fields to include:

  • User identifiers: userId, email, sessionId
  • Business context: orderId, transactionId, itemId
  • Technical context: route, component name, action type
  • Temporal context: timestamps, durations, retry counts
  • Environmental context: browser, device, network status

3. Context Injection Pattern

As we saw in the setup section, the browser plugin's contextInjector automatically enriches all logs and errors with application context. This pattern ensures you never have to manually add context to individual log statements.

When you configure the browser plugin with a contextInjector, every log call automatically includes that context:

// Simple log call
logzai.info('Feature flag toggled', {
  featureName: 'dark-mode',
  enabled: true
})

// Automatically enriched with context from the browser plugin:
// - userId, userEmail (from Redux state)
// - orgId, orgName (from Redux state)
// - currentRoute (from window.location)
// - userAgent, viewport (from browser)

This pattern has several benefits:

  • Consistency: Every log has the same baseline context
  • DRY Principle: Write context logic once, not in every log statement
  • Error Context: Even automatically caught errors include this context
  • Zero Overhead: Context is injected at log time, not computed unnecessarily

The contextInjector is called for every log, so you can include dynamic information like the current route or selected organization that changes during the user's session.

4. Log User Actions and State Changes

Logging user interactions creates a breadcrumb trail that's invaluable for debugging:

// Component with action logging
const ProductPage = () => {
  const handleAddToCart = (product: Product) => {
    logzai.info('Product added to cart', {
      productId: product.id,
      productName: product.name,
      price: product.price,
      quantity: 1,
      source: 'product-page',
    })

    dispatch(addToCart(product))
  }

  const handleQuickView = (product: Product) => {
    logzai.debug('Quick view opened', {
      productId: product.id,
      trigger: 'hover',
    })

    setQuickViewProduct(product)
  }

  return (
    // ... component JSX
  )
}

For forms, log both submission and validation failures:

const ContactForm = () => {
  const handleSubmit = async (values: FormValues) => {
    // Validate
    const errors = validateForm(values)

    if (Object.keys(errors).length > 0) {
      logzai.warn('Form validation failed', {
        formName: 'contact',
        errors: Object.keys(errors),
        attemptNumber: submitAttempts + 1,
      })
      return
    }

    try {
      await api.submitContactForm(values)

      logzai.info('Contact form submitted successfully', {
        formName: 'contact',
        fieldsFilled: Object.keys(values),
      })
    } catch (error) {
      logzai.error('Contact form submission failed', {
        formName: 'contact',
        error: error.message,
      })
    }
  }

  return (
    // ... form JSX
  )
}

5. Performance Considerations

Logging should never slow down your app. Follow these guidelines:

Async by Default: logzai sends logs asynchronously, but avoid doing heavy computation in log statements:

// ❌ Bad: Expensive operation in log statement
logzai.debug('Component rendered', {
  largeArray: expensiveComputation(data), // Blocks UI
})

// ✅ Good: Only log what's necessary
logzai.debug('Component rendered', {
  dataSize: data.length,
})

Sampling in Production: For high-frequency events, use sampling to reduce volume:

const logScrollEvent = () => {
  // Only log 1% of scroll events
  if (Math.random() < 0.01) {
    logzai.debug('Page scrolled', {
      scrollPosition: window.scrollY,
      scrollDepth: calculateScrollDepth(),
    })
  }
}

Environment-Specific Verbosity: Use different log levels for development vs. production:

const isDevelopment = process.env.NODE_ENV === 'development'

const logComponentMount = (componentName: string) => {
  if (isDevelopment) {
    logzai.debug(`Component mounted: ${componentName}`)
  }
}

Error Handling Best Practices

1. Implement Error Boundaries

React Error Boundaries are your first line of defense against unhandled errors crashing your entire app. They catch errors in the component tree and allow you to log them and show fallback UI:

import React, { Component, ErrorInfo, ReactNode } from 'react'
import logzai from 'logzai-js/browser'

interface Props {
  children: ReactNode
  fallback?: ReactNode
}

interface State {
  hasError: boolean
  error?: Error
}

class ErrorBoundary extends Component<Props, State> {
  constructor(props: Props) {
    super(props)
    this.state = { hasError: false }
  }

  static getDerivedStateFromError(error: Error): State {
    return { hasError: true, error }
  }

  componentDidCatch(error: Error, errorInfo: ErrorInfo) {
    // Log to logzai with full context
    logzai.exception('React error boundary caught error', error, {
      errorType: 'react-error',
      componentStack: errorInfo.componentStack,
      errorMessage: error.message,
      errorStack: error.stack,
      pathname: window.location.pathname,
      timestamp: Date.now(),
    })
  }

  render() {
    if (this.state.hasError) {
      return this.props.fallback || (
        <div style={{ padding: '20px', textAlign: 'center' }}>
          <h2>Something went wrong</h2>
          <p>We've been notified and are looking into it.</p>
          <button onClick={() => window.location.reload()}>
            Reload Page
          </button>
        </div>
      )
    }

    return this.props.children
  }
}

export default ErrorBoundary

Where to Place Error Boundaries: Wrap strategic parts of your app to isolate failures:

// App-level boundary
const App = () => (
  <ErrorBoundary>
    <Router>
      <Routes />
    </Router>
  </ErrorBoundary>
)

// Feature-level boundaries
const Dashboard = () => (
  <div>
    <ErrorBoundary fallback={<WidgetError />}>
      <RevenueWidget />
    </ErrorBoundary>

    <ErrorBoundary fallback={<WidgetError />}>
      <UsersWidget />
    </ErrorBoundary>

    <ErrorBoundary fallback={<WidgetError />}>
      <ActivityWidget />
    </ErrorBoundary>
  </div>
)

This approach ensures one broken widget doesn't take down the entire dashboard.

Note on Browser Plugin Integration: While the browser plugin automatically catches unhandled JavaScript errors and promise rejections, Error Boundaries are still essential. The browser plugin catches errors that escape React's component tree, while Error Boundaries catch errors within React components and allow you to show fallback UI. Together, they provide comprehensive error coverage:

  • Error Boundaries: Catch React component errors + show fallback UI + log with logzai.exception()
  • Browser Plugin: Catch all other JavaScript errors + unhandled promise rejections automatically

Both layers working together ensure nothing slips through the cracks.

2. Log Exceptions with Full Context

When logging exceptions, include everything needed to reproduce and debug the issue:

const fetchUserProfile = async (userId: string) => {
  try {
    const response = await api.get(`/users/${userId}`)
    return response.data
  } catch (error) {
    logzai.exception('Failed to fetch user profile', error, {
      // Error details
      errorMessage: error.message,
      errorCode: error.response?.status,

      // Request context
      userId,
      endpoint: `/users/${userId}`,

      // User context (automatically added by contextInjector)
      // - currentUserId, email
      // - orgId, orgName
      // - pathname

      // Additional debugging info
      retryCount: 0,
      timestamp: Date.now(),
    })

    throw error // Re-throw to let calling code handle
  }
}

The logzai.exception() method is specifically designed for errors and automatically extracts the stack trace and error details.

3. Handle Async Errors (Promise Rejections)

Good news: The browser plugin automatically captures unhandled promise rejections! Once you've enabled the browser plugin with logzai.plugin('browser', browserPlugin), all unhandled rejections are automatically logged with full context.

However, you should still handle promise rejections explicitly where possible for better control over error messages and recovery strategies:

const loadUserData = async () => {
  try {
    const [profile, settings, preferences] = await Promise.all([
      fetchProfile(),
      fetchSettings(),
      fetchPreferences(),
    ])

    return { profile, settings, preferences }
  } catch (error) {
    logzai.exception('Failed to load user data', error, {
      failedOperation: 'loadUserData',
      retryable: true,
    })

    // Show user-friendly error
    throw new Error('Unable to load your profile. Please try again.')
  }
}

4. API Error Handling Pattern

Centralize API error logging using interceptors. For axios:

import axios from 'axios'
import logzai from 'logzai-js/browser'

const apiClient = axios.create({
  baseURL: import.meta.env.VITE_API_URL,
})

// Request interceptor (log outgoing requests in debug mode)
apiClient.interceptors.request.use(
  (config) => {
    // Record the start time so the response interceptor can compute request duration
    (config as any).metadata = { startTime: Date.now() }

    if (import.meta.env.DEV) {
      logzai.debug('API request', {
        method: config.method?.toUpperCase(),
        url: config.url,
        params: config.params,
      })
    }
    return config
  },
  },
  (error) => {
    logzai.error('API request setup failed', {
      error: error.message,
    })
    return Promise.reject(error)
  }
)

// Response interceptor (log errors)
apiClient.interceptors.response.use(
  (response) => response,
  (error) => {
    const request = error.config

    logzai.error('API request failed', {
      method: request?.method?.toUpperCase(),
      url: request?.url,
      statusCode: error.response?.status,
      statusText: error.response?.statusText,
      errorMessage: error.message,
      responseData: error.response?.data,
      requestDuration: Date.now() - (request as any)?.metadata?.startTime,
    })

    return Promise.reject(error)
  }
)

export default apiClient

For fetch API:

const fetchWithLogging = async (url: string, options?: RequestInit) => {
  const startTime = Date.now()

  try {
    const response = await fetch(url, options)

    if (!response.ok) {
      const errorData = await response.text()

      logzai.error('Fetch request failed', {
        url,
        method: options?.method || 'GET',
        statusCode: response.status,
        statusText: response.statusText,
        errorData,
        duration: Date.now() - startTime,
      })

      throw new Error(`HTTP ${response.status}: ${response.statusText}`)
    }

    return response
  } catch (error) {
    logzai.exception('Fetch request error', error, {
      url,
      method: options?.method || 'GET',
      duration: Date.now() - startTime,
    })

    throw error
  }
}

5. User-Friendly Error Messages

Always separate what you log from what you show users:

class ErrorBoundary extends Component<Props, State> {
  // ... previous code

  componentDidCatch(error: Error, errorInfo: ErrorInfo) {
    // Log technical details
    logzai.exception('React error boundary caught error', error, {
      errorType: 'react-error',
      componentStack: errorInfo.componentStack,
      errorMessage: error.message,
      errorStack: error.stack,
    })
  }

  render() {
    if (this.state.hasError) {
      // Show friendly message to users
      return (
        <div className="error-container">
          <h2>Oops! Something went wrong</h2>
          <p>
            We've been notified and are working on a fix.
            In the meantime, try refreshing the page.
          </p>
          <button onClick={() => window.location.reload()}>
            Refresh Page
          </button>
        </div>
      )
    }

    return this.props.children
  }
}

Never expose stack traces, error codes, or technical jargon to end users—those belong in your logs, not your UI.
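
One lightweight way to keep that separation is a small helper that logs the technical details and returns only user-facing copy. This is just a sketch; the error categories and matching rules are illustrative and not part of logzai:

const USER_MESSAGES: Record<string, string> = {
  network: "We couldn't reach the server. Please check your connection and try again.",
  auth: 'Your session has expired. Please sign in again.',
  default: "Something went wrong. We've been notified and are looking into it.",
}

const toUserMessage = (error: Error): string => {
  // Full technical details go to the logs...
  logzai.exception('Unhandled UI error', error, {
    pathname: window.location.pathname,
  })

  // ...but the user only ever sees friendly copy
  if (error.message.includes('NetworkError')) return USER_MESSAGES.network
  if (error.message.includes('401')) return USER_MESSAGES.auth
  return USER_MESSAGES.default
}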

Advanced Patterns

1. Redux/State Management Integration

If you're using Redux, you can log state changes and actions with middleware:

import { Middleware } from '@reduxjs/toolkit'
import logzai from 'logzai-js/browser'

const logzaiMiddleware: Middleware = () => (next) => (action) => {
  // Only log in development or for specific action types
  if (
    import.meta.env.DEV ||
    ['auth/login', 'auth/logout', 'user/update'].includes(action.type)
  ) {
    logzai.debug('Redux action dispatched', {
      kind: 'redux',
      type: action.type,
      payload: action.payload,
      timestamp: Date.now(),
    })
  }

  return next(action)
}

// Add to your store configuration
export const store = configureStore({
  reducer: rootReducer,
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware().concat(logzaiMiddleware),
})

When to Use: Enable this in production only for critical actions to avoid log spam. Use it freely in development for debugging.

2. Route Change Logging

Track user navigation to understand user journeys:

import { useEffect } from 'react'
import { useLocation } from 'react-router-dom'
import logzai from 'logzai-js/browser'

const RouteLogger = () => {
  const location = useLocation()

  useEffect(() => {
    logzai.info('Route changed', {
      pathname: location.pathname,
      search: location.search,
      hash: location.hash,
      timestamp: Date.now(),
    })
  }, [location])

  return null
}

// Use in your app
const App = () => (
  <Router>
    <RouteLogger />
    <Routes>
      {/* your routes */}
    </Routes>
  </Router>
)

3. Custom Hooks for Logging

Create reusable hooks to standardize logging patterns:

import { useCallback } from 'react'
import { useLocation } from 'react-router-dom'
import logzai from 'logzai-js/browser'

// Hook for logging user actions
export const useLogAction = () => {
  const location = useLocation()

  return useCallback((action: string, context?: Record<string, any>) => {
    logzai.info(action, {
      ...context,
      pathname: location.pathname,
      timestamp: Date.now(),
    })
  }, [location])
}

// Usage in components
const ShoppingCart = () => {
  const logAction = useLogAction()

  const handleCheckout = () => {
    logAction('Checkout initiated', {
      itemCount: cartItems.length,
      totalValue: calculateTotal(cartItems),
    })

    // ... checkout logic
  }

  return (
    // ... component JSX
  )
}

4. Error Recovery Strategies

Build error boundaries that can recover from errors:

interface Props {
  children: ReactNode
  maxRetries?: number
}

interface State {
  hasError: boolean
  error?: Error
  retryCount: number
}

class RecoverableErrorBoundary extends Component<Props, State> {
  constructor(props: Props) {
    super(props)
    this.state = { hasError: false, retryCount: 0 }
  }

  static getDerivedStateFromError(error: Error): Partial<State> {
    return { hasError: true, error }
  }

  componentDidCatch(error: Error, errorInfo: ErrorInfo) {
    logzai.exception('Recoverable error boundary caught error', error, {
      errorType: 'react-error',
      componentStack: errorInfo.componentStack,
      retryCount: this.state.retryCount,
      maxRetries: this.props.maxRetries || 3,
    })
  }

  handleRetry = () => {
    const { maxRetries = 3 } = this.props
    const newRetryCount = this.state.retryCount + 1

    if (newRetryCount <= maxRetries) {
      logzai.info('Error boundary retry attempt', {
        retryCount: newRetryCount,
        maxRetries,
      })

      this.setState({
        hasError: false,
        error: undefined,
        retryCount: newRetryCount
      })
    } else {
      logzai.warn('Error boundary max retries reached', {
        retryCount: newRetryCount,
        maxRetries,
      })
    }
  }

  render() {
    const { hasError, retryCount } = this.state
    const { maxRetries = 3 } = this.props

    if (hasError) {
      return (
        <div className="error-container">
          <h2>Something went wrong</h2>
          {retryCount < maxRetries ? (
            <>
              <p>Would you like to try again?</p>
              <button onClick={this.handleRetry}>
                Try Again ({maxRetries - retryCount} attempts remaining)
              </button>
            </>
          ) : (
            <>
              <p>We've exhausted retry attempts. Please refresh the page.</p>
              <button onClick={() => window.location.reload()}>
                Refresh Page
              </button>
            </>
          )}
        </div>
      )
    }

    return this.props.children
  }
}

This pattern is useful for components that might fail due to temporary issues (network glitches, race conditions, etc.).
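
As a usage sketch (the widget name here is hypothetical), you would wrap a flaky, network-backed component and cap its retries like so:

// LiveMetricsWidget is an illustrative component that occasionally
// throws due to transient network failures
const MetricsPanel = () => (
  <RecoverableErrorBoundary maxRetries={2}>
    <LiveMetricsWidget />
  </RecoverableErrorBoundary>
)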

Production Checklist

Before deploying your logging and error handling to production, verify these items:

Configuration

  • [ ] Use environment-specific ingest tokens (different for dev/staging/prod)
  • [ ] Set environment field correctly based on NODE_ENV
  • [ ] Configure appropriate log levels (debug in dev, info/warn/error in prod)
  • [ ] Set mirrorToConsole to false in production
  • [ ] Enable browser plugin with logzai.plugin('browser', browserPlugin)
  • [ ] Configure contextInjector to include user, org, and route context
  • [ ] Set up errorFilter to exclude benign errors (ResizeObserver, etc.)

Testing

  • [ ] Test error boundaries with intentionally thrown errors
  • [ ] Verify browser plugin captures unhandled JavaScript errors automatically
  • [ ] Test that unhandled promise rejections are logged by the browser plugin
  • [ ] Verify logs appear in logzai dashboard with correct context
  • [ ] Verify contextInjector is working (check logs include userId, orgId, route)
  • [ ] Test errorFilter excludes errors you want to skip
  • [ ] Check that sensitive data is NOT being logged

Monitoring

  • [ ] Set up alerts for high error rates
  • [ ] Create dashboards for key metrics (error rates by route, user actions)
  • [ ] Configure notification channels (email, Slack, PagerDuty)
  • [ ] Review logs regularly to identify patterns

Privacy & Security

  • [ ] Never log passwords, tokens, or API keys
  • [ ] Sanitize PII (email, phone, address) before logging
  • [ ] Review data retention policies
  • [ ] Ensure compliance with GDPR/CCPA if applicable

Performance

  • [ ] Verify bundle size impact (logzai-js is ~15KB gzipped)
  • [ ] Use log sampling for high-frequency events
  • [ ] Avoid logging large objects or arrays
  • [ ] Profile your app to ensure logging doesn't impact performance

Common Pitfalls to Avoid

Over-Logging

Logging every single action creates noise that obscures real issues:

// ❌ Bad: Too much noise
const MyComponent = () => {
  logzai.debug('MyComponent rendering')
  logzai.debug('Props received', props)
  logzai.debug('State initialized')

  const handleClick = () => {
    logzai.debug('Button clicked')
    logzai.debug('State before update', state)
    setState(newState)
    logzai.debug('State after update', newState)
  }

  logzai.debug('MyComponent render complete')
  return <button onClick={handleClick}>Click me</button>
}

Instead, log meaningful events:

// ✅ Good: Signal over noise
const MyComponent = () => {
  const handleClick = () => {
    logzai.info('Important action triggered', {
      actionType: 'submit-form',
      formData: sanitizedData,
    })
    setState(newState)
  }

  return <button onClick={handleClick}>Click me</button>
}

Under-Logging

The opposite problem: not logging enough context to be useful:

// ❌ Bad: Not enough context
logzai.error('API call failed')
// ✅ Good: Actionable context
logzai.error('API call failed', {
  endpoint: '/api/users',
  method: 'POST',
  statusCode: 500,
  errorMessage: error.message,
  requestId: response.headers['x-request-id'],
})

Synchronous Logging in Hot Paths

Don't perform expensive operations in frequently-called code:

// ❌ Bad: Expensive operation in render
const ProductList = ({ products }) => {
  logzai.debug('Rendering products', {
    products: products.map(p => ({ // Expensive!
      id: p.id,
      name: p.name,
      fullDetails: JSON.stringify(p), // Very expensive!
    }))
  })

  return <div>{/* render products */}</div>
}
// ✅ Good: Log only what's necessary
const ProductList = ({ products }) => {
  logzai.debug('Rendering products', {
    productCount: products.length,
    firstProductId: products[0]?.id,
  })

  return <div>{/* render products */}</div>
}

Logging Sensitive Data

Never log passwords, tokens, credit cards, or other sensitive information:

// ❌ Bad: Logging sensitive data
logzai.info('User logged in', {
  email: user.email,
  password: user.password, // NEVER DO THIS!
  creditCard: user.paymentMethod.cardNumber, // NEVER DO THIS!
})
// ✅ Good: Sanitized logging
logzai.info('User logged in', {
  userId: user.id,
  email: user.email,
  hasPaymentMethod: !!user.paymentMethod,
  loginMethod: 'password',
})

Ignoring Error Boundaries

Don't let errors crash your entire app:

// ❌ Bad: No error boundary
const App = () => (
  <Router>
    <Routes>
      <Route path="/" element={<Dashboard />} />
      {/* One error here crashes everything */}
    </Routes>
  </Router>
)
// ✅ Good: Strategic error boundaries
const App = () => (
  <ErrorBoundary>
    <Router>
      <Routes>
        <Route path="/" element={
          <ErrorBoundary fallback={<DashboardError />}>
            <Dashboard />
          </ErrorBoundary>
        } />
      </Routes>
    </Router>
  </ErrorBoundary>
)

Generic Error Messages

Don't make debugging harder with vague errors:

// ❌ Bad: Generic error
throw new Error('Something went wrong')
// ✅ Good: Specific error
throw new Error(
  `Failed to load user profile: ${error.message} (User ID: ${userId})`
)

Conclusion

Implementing robust logging and error handling in React applications isn't optional—it's essential for building reliable, maintainable software. Let's recap the key takeaways:

  • Enable the browser plugin for automatic error capture—it catches JavaScript errors and promise rejections with zero extra code
  • Use structured logging with rich context via the contextInjector to make logs searchable and actionable
  • Implement error boundaries at strategic points to prevent cascading failures and show fallback UI
  • Log at appropriate levels to maintain signal-to-noise ratio
  • Handle specific errors explicitly for better control, while the browser plugin catches everything else
  • Separate technical logs from user-facing messages to maintain good UX
  • Test your error handling to ensure it works when things go wrong
  • Monitor and alert on error patterns to catch issues proactively

Polly – Amazon Polly Convert Text into Natural Speech Using AWS

2025-12-19 00:42:29

Description:

Learn how Amazon Polly, an AWS service, converts text into lifelike speech using artificial intelligence.
tags: aws, cloud, ai, devops

Amazon Polly – Text to Speech Made Easy

In modern applications, voice-based interaction is becoming very common.

From virtual assistants to accessibility tools, speech plays a major role.

Amazon Polly is an AWS service that helps convert text into natural-sounding speech.

What is Amazon Polly?

Amazon Polly is a Text-to-Speech (TTS) service provided by Amazon Web Services (AWS).

It uses advanced machine learning and deep learning technologies to generate realistic human voices.

With Polly, developers can easily add speech capability to their applications.

Why Use Amazon Polly?

Traditional voice recording:

  • Time-consuming
  • Difficult to update
  • Not scalable

With Amazon Polly:

  • No manual voice recording
  • Multiple languages & voices
  • Scalable & fast
  • Pay only for what you use

How Amazon Polly Works

  1. Input text is provided to Polly
  2. Select language and voice
  3. Polly converts text into speech
  4. Output is an audio file (MP3, OGG, PCM, etc.); see the sketch below
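
Here's a minimal sketch of that flow using the AWS SDK for JavaScript v3; the region, voice, and output file name are just example choices:

import { PollyClient, SynthesizeSpeechCommand } from '@aws-sdk/client-polly'
import { writeFile } from 'node:fs/promises'

const polly = new PollyClient({ region: 'us-east-1' })

async function synthesize() {
  // Steps 1-3: send the input text, chosen voice, and output format to Polly
  const { AudioStream } = await polly.send(
    new SynthesizeSpeechCommand({
      Text: 'Hello from Amazon Polly!',
      VoiceId: 'Joanna',
      Engine: 'neural',
      OutputFormat: 'mp3',
    })
  )

  // Step 4: the response carries the synthesized audio; save it to a file
  // (transformToByteArray is available on recent SDK v3 response streams;
  //  otherwise consume the Node stream manually)
  if (AudioStream) {
    await writeFile('hello.mp3', await AudioStream.transformToByteArray())
  }
}

synthesize()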

Key Features of Amazon Polly

  • Supports 30+ languages
  • Multiple male & female voices
  • Neural Text-to-Speech (NTTS)
  • Real-time and batch processing
  • Supports SSML (Speech Synthesis Markup Language)

Real-Time Use Cases

Amazon Polly is widely used in:

  • Voice assistants
  • E-learning platforms
  • Accessibility applications
  • News readers
  • IVR systems

Amazon Polly in AWS Ecosystem

Amazon Polly integrates well with:

  • AWS Lambda
  • Amazon S3
  • Amazon Transcribe
  • Amazon Lex

This makes it ideal for serverless and AI-based applications.

Advantages of Amazon Polly

  • Improves user experience
  • Easy to integrate using AWS SDKs
  • No infrastructure management
  • Highly reliable and scalable

Limitations

  • Requires internet connectivity
  • Cost depends on characters processed
  • Advanced voices cost slightly more

Conclusion

Amazon Polly is a powerful AWS service that brings voice to applications.

It helps developers build interactive, accessible, and intelligent systems with minimal effort.

For anyone exploring AWS, Cloud, or AI services, Amazon Polly is worth learning.

My sincere thanks to @santhoshnc for giving this assignment, which helped me learn about Amazon Polly and AWS services.


HashiCorp Vault: A Core Security Tool in DevSecOps

2025-12-19 00:40:16

HashiCorp Vault: A Core Security Tool in DevSecOps

As organizations increasingly adopt cloud computing and DevOps practices, securing sensitive data has become a major challenge. Traditional security approaches are no longer sufficient in fast-paced development environments. This challenge has led to the adoption of DevSecOps, which integrates security into every phase of the DevOps lifecycle. One important tool that supports this approach is HashiCorp Vault.

Overview of HashiCorp Vault

HashiCorp Vault is a secrets management and data protection tool designed to securely store, manage, and control access to sensitive information such as passwords, API keys, tokens, and certificates. Instead of hard-coding secrets into application code or configuration files, Vault provides a centralized and secure solution for managing them.

Key Features

  • Secure storage and encryption of sensitive data
  • Dynamic secrets generation with limited lifetime
  • Role-Based Access Control (RBAC)
  • Audit logging to track access
  • Automatic secret rotation
  • Integration with CI/CD pipelines and cloud platforms

Role in DevOps and DevSecOps

In a DevOps environment, HashiCorp Vault enables secure automation by allowing applications and services to retrieve secrets at runtime without exposing them in source code.

In a DevSecOps workflow, Vault supports the shift-left security model by embedding security controls early in the development process. It reduces the risk of credential leakage and strengthens security across continuous integration and continuous deployment pipelines.
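
To illustrate the runtime-retrieval idea, here's a minimal sketch that reads a secret over Vault's HTTP API (KV version 2). The mount path secret, the secret name my-app/db, and the environment variables are assumptions for the example; the SDKs listed in a later section offer the same capability:

// Fetch a secret from Vault at runtime instead of hard-coding it.
// Assumes VAULT_ADDR and VAULT_TOKEN are injected into the process by the platform.
async function getDatabasePassword(): Promise<string> {
  const response = await fetch(`${process.env.VAULT_ADDR}/v1/secret/data/my-app/db`, {
    headers: { 'X-Vault-Token': process.env.VAULT_TOKEN ?? '' },
  })

  if (!response.ok) {
    throw new Error(`Vault request failed with status ${response.status}`)
  }

  // The KV v2 engine wraps the secret payload in data.data
  const body = await response.json()
  return body.data.data.password
}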

Programming Languages Supported

HashiCorp Vault provides APIs and SDKs that support multiple programming languages, including:

  • Go
  • Python
  • Java
  • JavaScript
  • Ruby

This allows seamless integration with different applications and platforms.

Parent Company

HashiCorp Vault is developed and maintained by HashiCorp, a company known for its cloud infrastructure and security automation tools such as Terraform, Consul, and Packer.

Licensing Model

  • Vault Community Edition is open source and free to use.
  • Vault Enterprise Edition is a paid version that offers advanced security features and enterprise-level support.

Conclusion

HashiCorp Vault is a fundamental tool in the DevSecOps ecosystem. By securely managing secrets and integrating seamlessly with DevOps pipelines, it helps organizations build secure, scalable, and reliable cloud-native applications. For students learning AWS cloud-driven DevOps, understanding HashiCorp Vault is essential.

I would like to express my sincere thanks to @santhoshnc Sir for his valuable guidance and support.

#0 Intro

2025-12-19 00:39:17

topic) Guiding the crew as we set sail on a long voyage.

My blog offers an English version. Please check it out by clicking the link below.
For ENG VER.

Content

- 1. Who R Ü(ed-ing-Be going to)

- 2. Blog Operation Plan

- 3. In the Future

Keywords = ' '

-

-

Abstract = ' '

1. Who R Ü(ing-Be going to)

Nice to meet you. I'm Luciano.

I started this tech blog because the information we can remember is limited,
but I believe that keeping records adds an extension to that limited memory,
so I decided to start writing.

All of my posts on Dev.to are written in Korean.
However, since many Dev.to users write in English, I plan to publish
the same content in English through the link at the top, keeping the same
format, so that everyone receives it consistently.

1-1.ing

I'm currently studying with the 6.001 lectures provided by MIT OpenCourseWare.

I also have an idea for a paper on mathematics & computing that I'd like to
submit to arXiv, so I'm pursuing that research in parallel.

Finally, there's a service I'd like to launch in South Korea,
which I'm developing on my own.

In terms of priority, it's roughly 6.001 >>> the paper project > everything else.
Apart from the 6.001 code, you won't find the rest here.

I haven't shown any of it to the world yet.

1-2. Be going to

The plan is as follows.

MIT 6.0001 -> MIT 6.0002 & 18.01SC -> Harvard CS50 & MIT 6.042 -> MIT 6.006 -> MIT 6.046 -> MIT 6.036

The 6.001 curriculum wraps up in three weeks, and I plan to finish the
roadmap above by the end of 2026 or early 2027.

cf) The goal isn't just to finish: I want to focus on genuinely understanding
the material and on what actually stays with me after each course.

2. Blog Operation Plan

Do you remember the title of this post?
It's #0 Intro.

An intro can mean many things:
sometimes the intro of a YouTube video,
sometimes the intro of a film,
and sometimes the intro track of an album.

I took inspiration from the structure of music albums.
Albums are generally divided into two kinds:
single albums and non-single albums (that is, full-length albums).

A full-length album is usually structured like this:

Intro - .. - title - .. - Outro

#1 python
#2 Data Science

....To be revealed later....

3. In the Future

My ultimate goal is to implement autonomous thinking in artificial intelligence.
I want to give AI something like the human brain.

I know this is close to impossible,
but I think that's exactly what makes it so appealing.

I aspire to an EECS PhD program.
Artificial intelligence can feel like an invisible frontier,
much like a limit in mathematics, and I deeply respect
everyone doing research on its front lines.

fin) Conclusion =' '

References= ' '

I'm still learning and growing. If you spot any errors or need further explanation, please let me know. Thank you

🚀 Production-Ready FastAPI Template (Python)

2025-12-19 00:39:17

After years of building APIs and backend systems in production, I've organized and released the template I use as the foundation for real projects.

The goal is simple: start new products with a solid architecture, security, observability, and productivity from the very first commit.

🧩 What this template delivers:

🔐 Identity & Security

  • Users, Roles & Permissions
  • JWT (access + refresh tokens)
  • Redis for token blacklisting, caching, and rate limiting
  • Login activity tracking (access auditing)
  • Authentication and authorization middleware

🗄️ Well-structured MongoDB

  • Reusable base model (base_mongo_model)
  • Separate repositories (Query / Command)
  • Indexes, views, and models organized by domain

⚙️ System configuration

  • Basic Configurations
  • Company Configurations
  • Native feature flags

📦 Storage

  • MinIO (S3-compatible)
  • Google Cloud Storage
  • Local filesystem
  • Secure upload/download
  • User attachment management

📨 Emails

  • Transactional email sending
  • Ready-made HTML templates
  • A simple structure for adding new email types

📊 Observability & Infra

  • Docker & Docker Compose
  • Structured logging
  • Tracing with OpenTelemetry / Jaeger
  • Metrics ready for Prometheus
  • Grafana dashboards

🧰 Enterprise features

  • Feature flags per environment
  • Rate limiting
  • Configurable CORS
  • Full health checks

    • MongoDB
    • Redis
    • Storage
    • CPU, RAM, and disk

🧱 Clean architecture

  • Code that's easy to read and maintain
  • A straight-to-the-point architecture (no overengineering)
  • Organized by domain (identity, configurations, files, etc.)
  • Clear separation: controllers, services, repositories, schemas, and models

🛠️ Productivity

  • Automated Makefile

    • Docker commands
    • Python commands
    • Quick environment setup
  • Minimal boilerplate, focus on what matters

🎯 Ideal for:

  • Startups that want to launch fast without technical debt
  • Companies that need to standardize on FastAPI
  • Backend devs working on real-world systems
  • APIs, microservices, SaaS, and ERPs

🔗 Repository (open source):
👉 https://github.com/ortiz-python-templates/python-mongodb-api

Feedback and contributions are very welcome 🚀

🔒 Note (important)

A template built for real production use: readable code, a focused architecture, and tooling that speeds up day-to-day backend work.

Formatting PHP Code with PHP CS Fixer

2025-12-19 00:37:43

Introduction

Maintaining a consistent code style is a key aspect of software and web development. It improves code readability, reduces developers' cognitive load, and ensures everyone on the team sticks to the same standards.

But manually formatting code style is time-consuming, tedious, and error-prone. It's possible to format your code automatically using your integrated development environment (IDE) or text editor, but this approach has its limitations. Not all team members may use the same tools, and IDE configurations can vary widely. So it's usually better to use a dedicated code style tool that can be run consistently across different environments and in your continuous integration (CI) pipeline.

One of the tools you can use for formatting your PHP code is PHP CS Fixer. It's an incredibly popular tool and, at the time of writing, has over 214 million downloads on Packagist.

In this article, we'll look at how to install and configure PHP CS Fixer. We'll then explore how to run it locally and in parallel mode. Finally, I'll show you how to create a GitHub Actions workflow to automate code style checks on pull requests using PHP CS Fixer.

Installing PHP CS Fixer

To get started with PHP CS Fixer, you'll first need to install it via Composer. You can do this by running the following command in your project's root directory.

composer require friendsofphp/php-cs-fixer --dev

Create the Config File

After it finishes installing, you'll then need to create a configuration file for PHP CS Fixer. This file allows you to specify the coding standards you want to enforce, the directory you want to run the tool against, and any custom rules you may have.

To do this, you'll need to create a .php-cs-fixer.dist.php file in the root directory of your project.

Let's look at an example configuration file:

<?php

declare(strict_types=1);

use PhpCsFixer\Config;
use PhpCsFixer\Finder;

return (new Config())
    ->setRules([
        '@PSR12' => true,
    ])
    ->setFinder((new Finder())->in(__DIR__));

In the config file above, we are specifying that we want to use the PSR-12 coding standard and that we want to scan all files in the current directory (and child directories) for code style issues.

If you want to run the tool against specific directories, you can modify the setFinder method. For example, let's imagine we only want to run the tool against the src and tests directories. We can do this by updating the setFinder method as follows:

<?php

declare(strict_types=1);

use PhpCsFixer\Config;
use PhpCsFixer\Finder;

return (new Config())
    ->setRules([
        '@PSR12' => true,
    ])
    ->setFinder((new Finder())->in([
        __DIR__ . '/src',
        __DIR__ . '/tests',
    ]));

There's a ton of configuration options and rules available for PHP CS Fixer, and I won't dive into them all here. But I'd recommend checking out the documentation on GitHub so you can find any other rules you may wish to use: https://github.com/PHP-CS-Fixer/PHP-CS-Fixer/blob/master/doc/rules/index.rst.

Formatting Code with PHP CS Fixer

Now that we have PHP CS Fixer installed and configured, we can run it against our codebase.

Typically, there are two commands you'll want to use:

  • ./vendor/bin/php-cs-fixer check - This command checks your code for any style issues based on the rules defined in your configuration file. It will output a list of files that do not comply with the specified coding standards.
  • ./vendor/bin/php-cs-fixer fix - This command automatically fixes any code style issues found in your codebase based on the rules defined in your configuration file.

So if we wanted to run the check command to see if there are any code style issues in our project, we would run the following command:

vendor/bin/php-cs-fixer check

And this would generate output similar to the following:

❯ ./vendor/bin/php-cs-fixer check
PHP CS Fixer 3.89.1 Folding Bike by Fabien Potencier, Dariusz Ruminski and contributors.
PHP runtime: 8.4.2
Running analysis on 1 core sequentially.
You can enable parallel runner and speed up the analysis! Please see usage docs for more information.
Loaded config default from "/Users/ashallen/www/open-source/packages/email-utilities/.php-cs-fixer.dist.php".
Using cache file ".php-cs-fixer.cache".
 20/20 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%

   1) tests/Feature/Email/DomainIsNotTest.php
   2) src/Email.php

Found 2 of 20 files that can be fixed in 0.026 seconds, 20.00 MB memory used

Similarly, to automatically fix the issues found, we can run the following command:

vendor/bin/php-cs-fixer fix

Running this command would fix any of the code style issues found in our project and generate an output like so:

❯ ./vendor/bin/php-cs-fixer fix  
PHP CS Fixer 3.89.1 Folding Bike by Fabien Potencier, Dariusz Ruminski and contributors.
PHP runtime: 8.4.2
Running analysis on 1 core sequentially.
You can enable parallel runner and speed up the analysis! Please see usage docs for more information.
Loaded config default from "/Users/ashallen/www/open-source/packages/email-utilities/.php-cs-fixer.dist.php".
Using cache file ".php-cs-fixer.cache".
 20/20 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%

   1) tests/Feature/Email/DomainIsNotTest.php
   2) src/Email.php

Fixed 2 of 20 files in 0.029 seconds, 20.00 MB memory used

Running PHP CS Fixer in Parallel Mode

PHP CS Fixer allows running checks and fixes in parallel. This can significantly speed up the process, especially for larger codebases.

You can enable parallel mode by updating your configuration file to include the setParallelConfig method. Here's an example of how to do this:

<?php

declare(strict_types=1);

use PhpCsFixer\Config;
use PhpCsFixer\Finder;

return (new Config())
    ->setRules([
        '@PSR12' => true,
    ])
    ->setFinder((new Finder())->in(__DIR__))
    ->setParallelConfig(PhpCsFixer\Runner\Parallel\ParallelConfigFactory::detect());

In the example above, we're letting PHP CS Fixer automatically detect the optimal parallel configuration for our system. But you can manually specify these details if you wish.

Running ./vendor/bin/php-cs-fixer fix will then run the fixer in parallel mode. You should see output similar to the following:

❯ ./vendor/bin/php-cs-fixer fix  
PHP CS Fixer 3.89.1 Folding Bike by Fabien Potencier, Dariusz Ruminski and contributors.
PHP runtime: 8.4.2
Running analysis on 7 cores with 10 files per process.
Parallel runner is an experimental feature and may be unstable, use it at your own risk. Feedback highly appreciated!
Loaded config default from "/Users/ashallen/www/open-source/packages/email-utilities/.php-cs-fixer.dist.php".
Using cache file ".php-cs-fixer.cache".
 20/20 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%

   1) src/Email.php

Fixed 1 of 20 files in 0.210 seconds, 20.00 MB memory used

Running PHP CS Fixer using GitHub Actions

A great way to ensure that your code style remains consistent across your team is to automate the code style checks. Doing this helps catch any instances where you forget to run the code formatter locally before pushing your changes.

To automate this process, I like to use a GitHub Actions workflow that runs on every pull request. Since PHP CS Fixer returns a non-zero exit code (indicating failure) if any code style issues are found, we can rely on this to fail the workflow.

Let's look at a typical GitHub Actions workflow you may want to use:

name: Code Style

on:
  pull_request:

jobs:
  php-cs-fixer:
    runs-on: ubuntu-latest

    steps:
      - name: Update apt
        run: sudo apt-get update --fix-missing

      - name: Checkout code
        uses: actions/checkout@v2

      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: 8.4
          coverage: none

      - name: Install dependencies
        run: |
          composer install

      - name: Run PHP CS Fixer
        run: ./vendor/bin/php-cs-fixer check

In the example above, we can see that we're running the ./vendor/bin/php-cs-fixer check command as part of the workflow. If any code style issues are found, the command will return a non-zero exit code, causing the workflow to fail. If the command passes, it will return a zero exit code, and the workflow will succeed.

Should I Fix Issues Automatically in the Workflow?

You may want to run the fix command and automatically commit the changes to the pull request. However, I generally avoid this approach because it can introduce unexpected changes to the codebase without my knowledge. Instead, I prefer to run the check command and then fix any issues locally. This way, I have full control over the changes being made to the codebase.

This is only my personal opinion, though, and you may find that automatically fixing issues in the workflow works better for your team. If you do want to fix the issues in your workflow automatically, you might want to consider opening a pull request via your workflow with the updated code style changes instead of committing them directly to the branch. This way, you can review the changes before they are merged into the main codebase.

Conclusion

In this article, we explored how to set up and use PHP CS Fixer for maintaining code style in your PHP projects. We covered installation, configuration, running it locally and in parallel mode, and automating code style checks using GitHub Actions. By integrating PHP CS Fixer into your development workflow, you can ensure consistent code quality and adherence to coding standards across your projects.

If you enjoyed reading this post, you might be interested in checking out my 220+ page ebook "Battle Ready Laravel" which covers similar topics in more depth.

Or, you might want to check out my other 440+ page ebook "Consuming APIs in Laravel" which teaches you how to use Laravel to consume APIs from other services.

If you're interested in getting updated each time I publish a new post, feel free to sign up for my newsletter.

Keep on building awesome stuff! 🚀