
#AQuickRantAbout

Intro to Singletons

Published June 23, 2023


by Marlon

Recently I had to solve a problem I hadn't encountered before. One of the builds I worked on offloads a background process to a queue and I had to implement some logging for this process.

The easiest solution would have been to use a tool like Sentry for logging. However, the team was trying to limit the number of third-party tools in this build, and we had already implemented an image CDN, a cloud search provider, and a lightweight CSV parser.

Instead, we decided to create our own logger class, which brought its own set of challenges.

Sleeping Processes

My solution was part of a custom plugin for this site build, so my first instinct was to create some helper functions to log a message to a unique file every time the process was triggered.

This is where the first problem appeared: the background process solution we were using split the queue into batches. Every time a batch called the logging functions, they created a new file, even though the new logs were part of the same run.

I decided to give the plugin file less responsibility by creating a ProcessLogger class. However, it still ran into the same issue, because a new instance was being created for every batch.

I came across the singleton pattern when researching solutions to this problem. Essentially, this design pattern allows you to ensure that a class has only one instance.
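To make the idea concrete, here's a minimal, generic PHP singleton. This `Config` class is just an illustration of the pattern itself, not the production logger:

```php
<?php
/**
 * A minimal example of the singleton pattern — not the
 * production ProcessLogger class, just the core idea.
 */
class Config {
	/**
	 * The one and only instance.
	 *
	 * @var Config|null
	 */
	private static $instance = null;

	// A private constructor prevents `new Config()` from outside.
	private function __construct() {}

	// Cloning is blocked so no copies can be made.
	private function __clone() {}

	/**
	 * Every caller receives the exact same instance.
	 *
	 * @return Config
	 */
	public static function get_instance() {
		if ( null === self::$instance ) {
			self::$instance = new Config();
		}

		return self::$instance;
	}
}

// Both variables point at the same object.
$a = Config::get_instance();
$b = Config::get_instance();
var_dump( $a === $b ); // bool(true)
```

The private constructor and clone guard are what enforce "only one instance"; the static `get_instance` is the global access point.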

The Nitty Gritty

The entire process gets saved to the database as an object with a prefix and an action. On a pre-determined interval, the object is pulled from the database using the prefix and action key, and the batch gets processed within a timeframe.

If the batch takes longer than this timeframe (the default is 30 seconds, but it can be customized), it is interrupted and the entire process gets saved again. Otherwise, the batch is removed from the queue and the new process is saved to the database with the same prefix and action key.
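The queue mechanics described above can be sketched roughly like this. Note this is a hypothetical, in-memory stand-in: the real build persists the queue to the database under the prefix and action key, and the function names and placeholder "work" here are illustrative, not the actual library's API:

```php
<?php
/**
 * Hypothetical sketch of the batch/timeframe mechanics.
 * An array stands in for the database.
 */
$storage = array();

function save_queue( &$storage, $key, $items ) {
	$storage[ $key ] = $items;
}

function handle_batch( &$storage, $key, $time_limit = 30 ) {
	$queue     = isset( $storage[ $key ] ) ? $storage[ $key ] : array();
	$processed = array();
	$start     = time();

	foreach ( $queue as $i => $item ) {
		// Interrupted: re-save the unprocessed remainder under the same key.
		if ( time() - $start >= $time_limit ) {
			save_queue( $storage, $key, array_slice( $queue, $i ) );
			return $processed;
		}

		$processed[] = strtoupper( $item ); // placeholder for the real work
	}

	// Finished within the timeframe: the batch is removed from the queue.
	save_queue( $storage, $key, array() );
	return $processed;
}

save_queue( $storage, 'wp_example_action', array( 'a', 'b' ) );
handle_batch( $storage, 'wp_example_action' );
```

The key point is that an interrupted batch is re-saved under the same key, so the next interval resumes the same run rather than starting a fresh one.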

Each batch needs to be able to statically log a message using the ProcessLogger class to the same file for the current process. So, the background process also needed to be aware of the unique log filename corresponding to it.

An Inside View

The ProcessLogger class follows the singleton pattern which traditionally implements a getInstance method. For my purposes, though, I needed to be able to create an instance with a log filename if one wasn't already running.

ProcessLogger.php
<?php

use Traits\Process;

/**
 * Process logger
 */
class ProcessLogger {
	use Process;

	/**
	 * The instance of the class
	 *
	 * @var ProcessLogger|null
	 */
	private static $instance;

	/**
	 * The log file name
	 *
	 * @var string|null
	 */
	private $log_file;

	/**
	 * Filename prefix
	 *
	 * @var string
	 */
	private $prefix;

	/**
	 * File extension
	 *
	 * @var string
	 */
	private $extension = '.log';

	/**
	 * Constructor
	 *
	 * @param string $path Expects a fully qualified path to the log file.
	 */
	final private function __construct( $path = '' ) {
		$this->prefix = $this->process_prefix . '_' . $this->process_action . '_';

		if ( ! empty( $path ) ) {
			$this->log_file = $path;
		}
	}

	/**
	 * Get instance
	 *
	 * @param string $path Fully qualified path to the log file.
	 * @return ProcessLogger
	 */
	public static function get_instance( $path = '' ) {
		if ( ! self::$instance ) {
			// start() comes from the Process trait and is expected to return the instance.
			self::$instance = ( new ProcessLogger( $path ) )->start();
		}

		return self::$instance;
	}
}

My class's get_instance method checks the cached instance; if one doesn't already exist, it creates a new one.

This class shares a trait with the background process, which provides the prefix and action.
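The trait itself isn't shown above; here's a hypothetical sketch of the shape it might take. Only the `process_prefix` and `process_action` property names come from the constructor — the values and the helper method are assumptions:

```php
<?php
/**
 * Hypothetical sketch of the shared Process trait. Only the
 * property names (process_prefix, process_action) come from
 * the article; everything else is illustrative.
 */
trait Process {
	/**
	 * Prefix shared with the background process.
	 *
	 * @var string
	 */
	protected $process_prefix = 'wp';

	/**
	 * Action shared with the background process.
	 *
	 * @var string
	 */
	protected $process_action = 'background_process';

	/**
	 * The key both sides use to find the queued job in the database.
	 *
	 * @return string
	 */
	protected function get_identifier() {
		return $this->process_prefix . '_' . $this->process_action;
	}
}

// Any class that uses the trait builds the same identifier.
class ExampleHost {
	use Process;

	public function id() {
		return $this->get_identifier();
	}
}
```

Because both the logger and the background process use the same trait, they derive the same prefix and action key without duplicating configuration.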

Insights

The singleton pattern does violate the single responsibility principle because it solves two problems at the same time:

  • the class ensures that there is only one instance of itself at a time
  • the class provides a global access point to that instance

Despite this downside, it was an efficient solution to this problem, as the work was time-critical and resources were limited.

Since rolling it out, the team has used the logs frequently to troubleshoot potential problems. We also structured the log files as JSON so that, in the future, we can build some kind of user feedback by reading this data.
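As an illustration of that JSON structure, entries could be written one JSON object per line so the file stays append-friendly and each entry parses independently. The `write_log_entry` helper and its field names here are hypothetical, not the production implementation:

```php
<?php
/**
 * Hypothetical sketch of writing one JSON log entry per line
 * (JSON Lines). Field names are illustrative.
 */
function write_log_entry( $path, $level, $message ) {
	$entry = array(
		'timestamp' => gmdate( 'c' ),
		'level'     => $level,
		'message'   => $message,
	);

	// One JSON object per line; appending lets batches share the file.
	file_put_contents( $path, json_encode( $entry ) . PHP_EOL, FILE_APPEND );
}

write_log_entry( sys_get_temp_dir() . '/process-example.log', 'info', 'Batch started' );
```

Reading the file back is then just a matter of decoding each line, which is what makes the future user-feedback idea feasible.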