The Administrator’s Cookbook: A Definitive Guide to Running Backend Services with Docker and PowerShell

I. Introduction: The Modern Developer’s Local Environment

In modern software development, the phrase “it works on my machine” has become a notorious anti-pattern, signaling a critical divergence between a developer’s local setup and the production environment. Historically, developers installed services like PostgreSQL, MongoDB, or RabbitMQ directly onto their host operating systems. This practice leads to a host of predictable and costly problems: version conflicts between projects (e.g., Project A requires PostgreSQL 12 while Project B needs 16), complex and “dirty” installation and uninstallation processes, and a constant drain on system resources from background services that are not in active use.

Docker, as a containerization platform, provides the definitive solution to these challenges. By packaging an application with all its source code, core dependencies, tools, and libraries, a Docker image creates a standardized, portable unit. A container, which is a running instance of this image, operates in a completely isolated environment. This isolation ensures consistency; a “PostgreSQL 12” container runs identically on every developer’s machine, in CI/CD pipelines, and in production, effectively eliminating environment-specific bugs.

This model enables rapid provisioning and teardown. A developer can spin up a fully configured database in seconds using a docker run command and, just as quickly, destroy it with docker rm. This capability allows for ephemeral, per-task environments without “polluting” the host operating system, a practice that is ideal for testing, feature-branch development, and experimentation.

This guide focuses specifically on the modern Windows development workflow. The combination of PowerShell as the command-line interface and Docker Desktop as the container engine creates a powerful, hybrid-OS environment. Docker Desktop for Windows is the critical bridge that leverages the Windows Subsystem for Linux 2 (WSL2) to seamlessly connect the user’s Windows-based terminal to the robust, Linux-based container ecosystem. Since the vast majority of popular backend services—including PostgreSQL, Kafka, and Redis—are developed for and run natively on Linux, this architecture has become the standard for high-productivity development on Windows. This reference manual is designed to make that abstract bridge feel concrete, simple, and scriptable, providing a definitive “cookbook” for administrators and developers operating in a PowerShell environment.

II. Foundational Setup: Docker on the Windows Terminal

A. Prerequisites: Docker Desktop and WSL2

Before any docker commands can be run, the host system must be properly configured. The primary component is Docker Desktop for Windows.

A critical dependency for Docker Desktop is the Windows Subsystem for Linux 2 (WSL2). Docker Desktop uses WSL2 as its backend engine to run Linux containers, which requires the host machine to have virtualization capabilities enabled in the BIOS/UEFI.

The most common source of failure when installing Docker on a new Windows machine is a misconfigured or absent WSL2 environment. While in the past this required multiple manual steps to enable virtualization platforms and install distributions, the process is now streamlined.

The recommended installation method is to open a PowerShell terminal as Administrator and execute a single command:

PowerShell

# Install-WSL.ps1
wsl --install

This command automatically enables all necessary features (including the Virtual Machine Platform), downloads and installs the latest Linux kernel, installs a default Linux distribution (Ubuntu), and sets WSL2 as the default. After a system restart, Docker Desktop can be installed from its official installer. Once Docker Desktop is running, the docker command becomes available in PowerShell.
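Before running any of the recipes that follow, it can help to confirm that the docker CLI is actually reachable from the current shell. A minimal sketch (in Python rather than PowerShell; the `docker_available` helper name is hypothetical):

```python
# Illustrative pre-flight check: is the docker CLI on the current PATH?
import shutil

def docker_available() -> bool:
    """True if the docker CLI can be found on the current PATH."""
    return shutil.which("docker") is not None

if not docker_available():
    print("Docker CLI not found - is Docker Desktop installed and running?")
```

The same check in PowerShell is simply `Get-Command docker`.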

B. PowerShell Command Syntax

All commands in this guide are formatted for PowerShell. There are three key syntactical elements to note:

  1. Parameters: Scripts in this guide use Param() blocks to define default values. This allows you to run them as-is or override settings (e.g., .\Start-Postgres.ps1 -Password "my-new-secret").
  2. Host Paths: For mounting directories from the host (the local machine) into a container, this guide uses the PowerShell variable ${PWD}. This variable automatically resolves to the current working directory of the PowerShell terminal.
  3. Quoting: Environment variables (-e) that contain special characters, a common requirement for complex passwords, must be enclosed in double quotes ("...") to be interpreted correctly by PowerShell.

C. The Service Administrator’s Toolkit: Essential Commands

Creating services with docker run is only the first step. Managing their lifecycle requires a toolkit of essential Docker CLI commands. This table serves as a central reference for the management operations that will be used throughout this guide.

Table 2.1: Docker Lifecycle Management Commands (PowerShell)

Command | Purpose
docker --version | Verifies the Docker Engine version and client/server API versions.
docker compose version | Verifies the version of Docker Compose, which is included with Docker Desktop.
docker ps | Lists all running containers, showing their names, IDs, and port mappings.
docker ps -a | Lists all containers, including those that are stopped or have exited due to an error.
docker logs <container_name> | Shows the real-time log output (STDOUT/STDERR) from a container. This is the primary tool for debugging startup failures.
docker exec -it <container_name> bash | Opens an interactive “bash” or “sh” shell inside a running container. This is invaluable for debugging, running CLIs, or inspecting the container’s filesystem.
docker stop <container_name> | Stops a running container gracefully (sends SIGTERM, then SIGKILL).
docker rm <container_name> | Removes a stopped container. The container must be stopped before it can be removed.
docker volume ls | Lists all named volumes that Docker is managing. This is where persistent data is stored.
docker volume prune | Removes all unused volumes (i.e., volumes not currently attached to any container, running or stopped). This is a critical command for freeing disk space.

III. Analysis of the Core Request: PostgreSQL and pgAdmin

The provided commands for instantiating a PostgreSQL database and a pgAdmin web interface serve as an excellent, practical example of Docker’s utility. This section deconstructs these commands to establish the core patterns used in this guide.

A. The PostgreSQL Database (postgres)

The process correctly begins by creating a persistent storage location before creating the container.

Command 1: docker volume create...

PowerShell

# Create-PostgresVolume.ps1
Param(
    [string]$VolumeName = "pc1_postgres_data"
)
docker volume create $VolumeName
  • Analysis: This command explicitly creates a named volume. A named volume is a block of storage on the host machine that is fully managed by the Docker engine. Its key feature is that its lifecycle is decoupled from any container. When the pc1_postgres container is stopped and removed (with docker rm), this pc1_postgres_data volume remains, protecting all database files, tables, and data from being destroyed.

Command 2: docker run...

PowerShell

# Start-Postgres.ps1
Param(
    [string]$ContainerName = "pc1_postgres",
    [string]$Password = "admin",
    [int]$HostPort = 5432,
    [string]$VolumeName = "pc1_postgres_data",
    [string]$ImageTag = "latest"
)

# Trap Note: Postgres v18+ requires mounting to /var/lib/postgresql (not /data). See "Common Traps" section.
docker run --name $ContainerName -e POSTGRES_PASSWORD=$Password -d -p ${HostPort}:5432 -v ${VolumeName}:/var/lib/postgresql postgres:$ImageTag
  • --name $ContainerName: Assigns a unique, human-readable name to this specific container instance. This name can be used in other Docker commands (e.g., docker logs pc1_postgres).
  • -e POSTGRES_PASSWORD=$Password: This is the most critical environment variable for the official postgres image. The image’s entrypoint script requires this variable to be set. On first run, this script will initialize the database and set the password for the postgres superuser to this value. The container will fail to start if this is not provided.
  • -d: Runs the container in detached mode, meaning it runs in the background and returns control of the terminal.
  • -p ${HostPort}:5432: This publishes a port mapping in the format <HOST_PORT>:<CONTAINER_PORT>. It exposes the container’s internal port 5432 (the default for PostgreSQL) to port 5432 on the host machine. This is what allows host-based clients (like pgAdmin or DBeaver) to connect via localhost:5432.
  • -v ${VolumeName}:/var/lib/postgresql: This volume mapping connects the named volume pc1_postgres_data (from Step 1) to a specific directory inside the container.
  • postgres:$ImageTag: This is the name of the image to run. Docker will look for it locally, and if not found, pull it from Docker Hub.

A common and catastrophic mistake, especially with older tutorials, is to mount the volume to /var/lib/postgresql/data. This is no longer correct for postgres:latest (v18+): the newer images manage data in version-specific subdirectories and require the mount to be on the parent directory, /var/lib/postgresql. Using the old /data path will cause the container to fail on startup.
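This version-dependent mount path lends itself to a scripted guard. As a minimal illustrative sketch (Python rather than PowerShell; the helper names are hypothetical), a function can select the correct mount target from the image's major version:

```python
# Illustrative helpers (hypothetical, not part of the official image): choose
# the in-container mount target for a given Postgres major version, per the
# guidance above (v18+ mounts the parent directory /var/lib/postgresql).
def postgres_mount_target(major_version: int) -> str:
    """Return the volume mount target for the official postgres image."""
    if major_version >= 18:
        return "/var/lib/postgresql"
    return "/var/lib/postgresql/data"

def volume_flag(volume_name: str, major_version: int) -> str:
    """Build the -v argument for docker run."""
    return f"-v {volume_name}:{postgres_mount_target(major_version)}"

print(volume_flag("pc1_postgres_data", 18))  # -v pc1_postgres_data:/var/lib/postgresql
print(volume_flag("pc1_postgres_data", 16))  # -v pc1_postgres_data:/var/lib/postgresql/data
```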

How to Access & Verify (PostgreSQL)

  1. Directly (via docker exec): To get a command-line interface to the database, one can open the psql client inside the running container.

PowerShell

# Access-Postgres-Shell.ps1
Param(
    [string]$ContainerName = "pc1_postgres",
    [string]$PostgresUser = "postgres"
)
docker exec -it $ContainerName psql -U $PostgresUser

This command executes a new process (psql) inside the pc1_postgres container.
  2. From Host (via Client): Using any standard GUI client (Azure Data Studio, DBeaver, etc.), connect using the following credentials:
    • Host: localhost
    • Port: 5432
    • User: postgres
    • Password: admin (or the value set for POSTGRES_PASSWORD)

B. The Web Interface (dpage/pgadmin4)

This command instantiates the web-based graphical tool to manage the database.

Command: docker run...

PowerShell

# Start-pgAdmin.ps1
Param(
    [string]$ContainerName = "pgadmin",
    [int]$HostPort = 8080,
    [string]$DefaultEmail = "your_email@example.com",
    [string]$DefaultPassword = "your_secure_password",
    [string]$ImageTag = "latest"
)

# Trap Note: Connecting to 'localhost' from pgAdmin will fail. See "Common Traps" section.
docker run --name $ContainerName -p ${HostPort}:80 -e "PGADMIN_DEFAULT_EMAIL=$DefaultEmail" -e "PGADMIN_DEFAULT_PASSWORD=$DefaultPassword" -d dpage/pgadmin4:$ImageTag
  • --name $ContainerName: A unique name for the pgAdmin container.
  • -p ${HostPort}:80: Publishes the container’s internal web server, which runs on port 80, to port 8080 on the host machine.
  • -e "PGADMIN_DEFAULT_EMAIL=...": This environment variable is required by the dpage/pgadmin4 image. It is not a PostgreSQL credential; it is used to create the initial administrative user for the pgAdmin web interface itself.
  • -e "PGADMIN_DEFAULT_PASSWORD=...": The password for the pgAdmin administrative user.
  • dpage/pgadmin4: The official, verified publisher image for pgAdmin 4.

How to Access & Verify (pgAdmin)

  1. Open a web browser on the host machine and navigate to http://localhost:8080.
  2. Log in to the web interface using the email and password provided in the -e flags (e.g., your_email@example.com and your_secure_password).

C. Linking Services: Connecting pgAdmin to Postgres

The final step is to use the pgAdmin web UI to connect to the pc1_postgres database container. There are two primary methods to accomplish this, revealing a core concept of Docker networking.

Method 1: The Host Port Method (Simple but Brittle)

This method relies on the port mappings established on the host.

  1. In the pgAdmin UI, select “Add New Server”.
  2. In the “Connection” tab, use the following credentials:
    • Host: host.docker.internal
    • Port: 5432
    • User: postgres
    • Password: admin

This works because host.docker.internal is a special DNS name that Docker provides to containers, which resolves to the host machine’s IP. The connection path is: Browser -> localhost:8080 -> pgAdmin Container -> (connects to host) -> host.docker.internal:5432 -> Postgres Container.

Method 2: The Docker Network Method (Robust & Recommended)

This method is more robust as it does not rely on host port publishing and instead uses Docker’s internal networking and DNS.

  1. First, create a user-defined bridge network:

PowerShell
# Create-Docker-Network.ps1
Param(
    [string]$NetworkName = "my-dev-network"
)
docker network create $NetworkName

  2. Stop and remove the existing containers (if they are running):

PowerShell

# Stop-And-Remove-Containers.ps1
Param(
    [string[]]$ContainerNames = @("pc1_postgres", "pgadmin")
)
docker stop $ContainerNames
docker rm $ContainerNames

  3. Re-run both docker run commands (for Postgres and pgAdmin), but add the --network my-dev-network flag to each.
  4. Now, in the pgAdmin UI “Connection” tab, use:
    • Host: pc1_postgres (the --name of the container)
    • Port: 5432
    • User: postgres
    • Password: admin

This works because containers on the same user-defined network can resolve each other by their --name using Docker’s built-in DNS service. This is the recommended pattern as it mimics production environments.

A common misstep is to use localhost in the pgAdmin host field. This will fail. Inside a container, localhost (or 127.0.0.1) refers only to the container itself. When localhost is entered in the pgAdmin UI, the pgadmin container is attempting to connect to itself on port 5432, where no database exists. The host must be the container name (pc1_postgres) or host.docker.internal.
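The routing rules above can be summarized in a few lines. A minimal sketch (Python; the `postgres_host` helper is hypothetical) of which hostname a client should use to reach the database:

```python
# Illustrative sketch: which hostname a client should use to reach the
# pc1_postgres container, based on where the client runs and how the
# containers are networked.
def postgres_host(client_in_container: bool, shared_user_network: bool) -> str:
    if not client_in_container:
        return "localhost"            # host clients use the published port
    if shared_user_network:
        return "pc1_postgres"         # Docker DNS resolves the container name
    return "host.docker.internal"     # fall back to the host-port route

print(postgres_host(False, False))  # localhost
print(postgres_host(True, True))    # pc1_postgres
print(postgres_host(True, False))   # host.docker.internal
```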

IV. The Developer’s Cookbook: An Exhaustive Command Reference

This section provides the “exhaustive list” of docker run commands for common backend services, formatted as single-line PowerShell scripts with parameters.


A. Relational Databases

1. MariaDB (mariadb)

  • Service/Image: MariaDB (a high-performance, community-forked drop-in replacement for MySQL)
  • Image: mariadb:latest

The mariadb image maintains backward compatibility with MYSQL_... prefixed environment variables from its MySQL heritage. However, the official, canonical variables are MARIADB_..., and these are the recommended standard for all new deployments.

  • PowerShell Scripts:

PowerShell
# Create-MariaDBVolume.ps1
Param(
    [string]$VolumeName = "mariadb_data"
)
docker volume create $VolumeName

PowerShell

# Start-MariaDB.ps1
Param(
    [string]$ContainerName = "mariadb-dev",
    [string]$RootPassword = "YOUR_SECRET_PASSWORD",
    [string]$AppName = "my_app",
    [string]$AppPassword = "USER_PASSWORD",
    [int]$HostPort = 3306,
    [string]$VolumeName = "mariadb_data",
    [string]$ImageTag = "latest"
)

# Note: See "Common Traps" section for password persistence.
docker run --name $ContainerName -e MARIADB_ROOT_PASSWORD=$RootPassword -e MARIADB_DATABASE="${AppName}_db" -e MARIADB_USER="${AppName}_user" -e MARIADB_PASSWORD=$AppPassword -v ${VolumeName}:/var/lib/mysql -p ${HostPort}:3306 -d mariadb:$ImageTag
  • Table 4.1: MariaDB Parameter Reference
Parameter | PowerShell Example | Explanation & Placeholder Notes
--name | $ContainerName | A unique, friendly name for the container.
-e MARIADB_ROOT_PASSWORD | -e MARIADB_ROOT_PASSWORD=... | (Required) Sets the root superuser password for the database.
-e MARIADB_DATABASE | -e MARIADB_DATABASE=... | (Optional) Creates a new database schema on first startup.
-e MARIADB_USER | -e MARIADB_USER=... | (Optional) Creates a new non-root user. Must be used with MARIADB_PASSWORD.
-e MARIADB_PASSWORD | -e MARIADB_PASSWORD=... | (Optional) Sets the password for the new user created with MARIADB_USER.
-v | -v ${VolumeName}:/var/lib/mysql | (Required for Persistence) Maps the named volume to the container’s internal data directory.
-p | -p ${HostPort}:3306 | Maps the host’s port 3306 to the container’s default MySQL/MariaDB port 3306.
-d | -d | Runs the container in detached (background) mode.
Image | mariadb:$ImageTag | The official MariaDB image from Docker Hub.
  • How to Access & Verify (MariaDB):
    1. Directly (via docker exec): Access the mariadb command-line client inside the container.

PowerShell

# Access-MariaDB-Shell.ps1
Param(
    [string]$ContainerName = "mariadb-dev"
)
docker exec -it $ContainerName mariadb -u root -p

(You will be prompted to enter YOUR_SECRET_PASSWORD.)
    2. From Host (via Client): Use any MySQL-compatible GUI (DBeaver, MySQL Workbench, Beekeeper Studio).
      • Host: localhost
      • Port: 3306
      • User: root
      • Password: YOUR_SECRET_PASSWORD

2. MySQL (mysql)

  • Service/Image: MySQL Community Server
  • Image: mysql:latest
  • PowerShell Scripts:

PowerShell
# Create-MySQLVolume.ps1
Param(
    [string]$VolumeName = "mysql_data"
)
docker volume create $VolumeName

PowerShell

# Start-MySQL.ps1
Param(
    [string]$ContainerName = "mysql-dev",
    [string]$RootPassword = "YOUR_SECRET_PASSWORD",
    [string]$AppName = "my_app",
    [string]$AppPassword = "USER_PASSWORD",
    [int]$HostPort = 3307, # Defaulting to 3307 to avoid MariaDB conflict
    [string]$VolumeName = "mysql_data",
    [string]$ImageTag = "latest"
)

# Note: See "Common Traps" section for password persistence.
docker run --name $ContainerName -e MYSQL_ROOT_PASSWORD=$RootPassword -e MYSQL_DATABASE="${AppName}_db" -e MYSQL_USER="${AppName}_user" -e MYSQL_PASSWORD=$AppPassword -v ${VolumeName}:/var/lib/mysql -p ${HostPort}:3306 -d mysql:$ImageTag

Note: This script uses -p 3307:3306 by default. This is intentional: it maps the container’s internal port 3306 to the host’s port 3307, which avoids a port conflict if a developer needs to run both MariaDB (on 3306) and MySQL (on 3307) simultaneously, a common scenario.
  • Table 4.2: MySQL Parameter Reference
Parameter | PowerShell Example | Explanation & Placeholder Notes
--name | $ContainerName | A unique, friendly name for the container.
-e MYSQL_ROOT_PASSWORD | -e MYSQL_ROOT_PASSWORD=... | (Required) Sets the root superuser password.
-e MYSQL_DATABASE | -e MYSQL_DATABASE=... | (Optional) Creates a new database schema on first startup.
-e MYSQL_USER | -e MYSQL_USER=... | (Optional) Creates a new non-root user. Must be used with MYSQL_PASSWORD.
-e MYSQL_PASSWORD | -e MYSQL_PASSWORD=... | (Optional) Sets the password for the new user.
-v | -v ${VolumeName}:/var/lib/mysql | (Required for Persistence) Maps the named volume to the container’s internal data directory.
-p | -p ${HostPort}:3306 | Maps the host’s port 3307 to the container’s 3306. Connect via localhost:3307.
-d | -d | Runs the container in detached (background) mode.
Image | mysql:$ImageTag | The official MySQL image from Docker Hub.
  • How to Access & Verify (MySQL):
    1. Directly (via docker exec): Access the mysql command-line client.

PowerShell

# Access-MySQL-Shell.ps1
Param(
    [string]$ContainerName = "mysql-dev"
)
docker exec -it $ContainerName mysql -u root -p

(You will be prompted for YOUR_SECRET_PASSWORD.)
    2. From Host (via Client): Use MySQL Workbench or another client.
      • Host: localhost
      • Port: 3307 (as specified in the script’s default $HostPort)
      • User: root
      • Password: YOUR_SECRET_PASSWORD

3. Microsoft SQL Server (mcr.microsoft.com/mssql/server)

  • Service/Image: Microsoft SQL Server on Linux
  • Image: mcr.microsoft.com/mssql/server:2022-latest

This image is hosted on Microsoft’s own registry, the Microsoft Artifact Registry (mcr.microsoft.com). It requires the ACCEPT_EULA=Y environment variable and a complex MSSQL_SA_PASSWORD to be set; otherwise, the container will fail to start.

  • PowerShell Scripts:

PowerShell
# Create-MSSQLVolume.ps1
Param(
    [string]$VolumeName = "mssql_data"
)
docker volume create $VolumeName

PowerShell

# Start-MSSQL.ps1
Param(
    [string]$ContainerName = "mssql-dev",
    [string]$SAPassword = "Your_S@per_Str0ng_P@ssw0rd",
    [string]$ProductEdition = "Developer",
    [int]$HostPort = 1433,
    [string]$VolumeName = "mssql_data",
    [string]$Image = "mcr.microsoft.com/mssql/server:2022-latest"
)

# Note: ACCEPT_EULA=Y is required. See "Common Traps" section for password persistence.
docker run --name $ContainerName -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=$SAPassword" -e "MSSQL_PID=$ProductEdition" -v ${VolumeName}:/var/opt/mssql -p ${HostPort}:1433 -d $Image
  • Table 4.3: MS SQL Server Parameter Reference
Parameter | PowerShell Example | Explanation & Placeholder Notes
-e "ACCEPT_EULA" | -e "ACCEPT_EULA=Y" | (Required) You must set this to Y to accept the End-User License Agreement.
-e "MSSQL_SA_PASSWORD" | -e "MSSQL_SA_PASSWORD=..." | (Required) Sets the sa (System Admin) password. Must be a strong, complex password.
-e "MSSQL_PID" | -e "MSSQL_PID=Developer" | (Optional) Sets the Product ID (edition). Developer is the default and is free for development use. Other options: Express, Standard, etc.
-v | -v ${VolumeName}:/var/opt/mssql | (Required for Persistence) Maps the named volume to the container’s internal data directory.
-p | -p ${HostPort}:1433 | Maps the host’s port 1433 to the container’s default SQL Server port 1433.
-d | -d | Runs the container in detached (background) mode.
Image | mcr.microsoft.com/... | The full, official image path from the Microsoft Artifact Registry.
--name | $ContainerName | A unique, friendly name for the container.
  • How to Access & Verify (MS SQL Server):
    1. From Host (GUI): This is the primary access method. Use Azure Data Studio or SQL Server Management Studio (SSMS).
    2. Connection Settings:
      • Server / Server Name: localhost, 1433 (some clients, like SSMS, may also work with 127.0.0.1, 1433)
      • Authentication Type: SQL Login
      • Login / User: sa
      • Password: Your_S@per_Str0ng_P@ssw0rd (the complex password you provided)
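SQL Server enforces password complexity on the sa account: at least 8 characters, drawing from three of the four character classes (uppercase, lowercase, digits, symbols). A pre-flight check can catch a rejected password before the container silently exits. A minimal sketch (Python rather than PowerShell; the `is_valid_sa_password` helper is hypothetical):

```python
# Illustrative pre-flight check against SQL Server's documented password
# policy: at least 8 characters, from three of the four character classes.
import re

def is_valid_sa_password(pw: str) -> bool:
    if len(pw) < 8:
        return False
    classes = [
        bool(re.search(r"[A-Z]", pw)),        # uppercase letters
        bool(re.search(r"[a-z]", pw)),        # lowercase letters
        bool(re.search(r"[0-9]", pw)),        # digits
        bool(re.search(r"[^A-Za-z0-9]", pw)), # symbols
    ]
    return sum(classes) >= 3

print(is_valid_sa_password("Your_S@per_Str0ng_P@ssw0rd"))  # True
print(is_valid_sa_password("weakpass"))                    # False
```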

B. NoSQL and In-Memory Stores

1. MongoDB (mongo)

  • Service/Image: MongoDB Community Edition
  • Image: mongo:latest (The Docker Official Image)
  • PowerShell Scripts:

PowerShell
# Create-MongoVolume.ps1
Param(
    [string]$VolumeName = "mongo_data"
)
docker volume create $VolumeName

PowerShell

# Start-MongoDB.ps1
Param(
    [string]$ContainerName = "mongo-dev",
    [string]$RootUser = "YOUR_MONGO_USER",
    [string]$RootPassword = "YOUR_SECRET_PASSWORD",
    [int]$HostPort = 27017,
    [string]$VolumeName = "mongo_data",
    [string]$ImageTag = "latest"
)

# Note: See "Common Traps" section for password persistence.
docker run --name $ContainerName -e MONGO_INITDB_ROOT_USERNAME=$RootUser -e MONGO_INITDB_ROOT_PASSWORD=$RootPassword -v ${VolumeName}:/data/db -p ${HostPort}:27017 -d mongo:$ImageTag
  • Table 4.4: MongoDB Parameter Reference
Parameter | PowerShell Example | Explanation & Placeholder Notes
--name | $ContainerName | A unique, friendly name for the container.
-e MONGO_INITDB_ROOT_USERNAME | -e MONGO_INITDB_ROOT_USERNAME=... | (Optional but Recommended) Creates a root user in the admin database on first initialization.
-e MONGO_INITDB_ROOT_PASSWORD | -e MONGO_INITDB_ROOT_PASSWORD=... | (Optional but Recommended) Sets the password for the root user. If MONGO_INITDB_ROOT_USERNAME is set, this is required.
-v | -v ${VolumeName}:/data/db | (Required for Persistence) Maps the named volume to the container’s internal data directory (/data/db). Other data lives in /data/configdb.
-p | -p ${HostPort}:27017 | Maps the host’s port 27017 to the container’s default MongoDB port 27017.
-d | -d | Runs the container in detached (background) mode.
Image | mongo:$ImageTag | The official MongoDB image.
  • How to Access & Verify (MongoDB):
    1. Directly (via docker exec): Access the mongosh (Mongo Shell) client inside the container.

PowerShell

# Access-Mongo-Shell.ps1
Param(
    [string]$ContainerName = "mongo-dev"
)
docker exec -it $ContainerName mongosh

(If you set credentials, you will then need to authenticate: use admin; then db.auth('YOUR_MONGO_USER', 'YOUR_SECRET_PASSWORD');)
    2. From Host (GUI/CLI): Use MongoDB Compass or a CLI.
    3. Connection String:
      • mongodb://YOUR_MONGO_USER:YOUR_SECRET_PASSWORD@localhost:27017/
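Connection strings embed credentials in a URI, so passwords containing reserved characters (:, @, /) must be percent-encoded before being pasted into the string. A minimal sketch (Python, using the standard library's quote_plus; the `mongo_uri` helper is hypothetical):

```python
# Illustrative sketch: building the connection string safely when credentials
# contain URI-reserved characters (':', '@', '/'), which must be percent-encoded.
from urllib.parse import quote_plus

def mongo_uri(user: str, password: str, host: str = "localhost", port: int = 27017) -> str:
    return f"mongodb://{quote_plus(user)}:{quote_plus(password)}@{host}:{port}/"

print(mongo_uri("YOUR_MONGO_USER", "YOUR_SECRET_PASSWORD"))
print(mongo_uri("admin", "p@ss:word/1"))  # reserved characters are escaped
```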

2. Redis (redis)

  • Service/Image: Redis (in-memory key-value store)
  • Image: redis:latest

By default, a simple docker run redis command will run an in-memory-only instance. This script enables persistence by passing arguments to enable RDB snapshotting and mounting a volume.

  • PowerShell Scripts:

PowerShell
# Create-RedisVolume.ps1
Param(
    [string]$VolumeName = "redis_data"
)
docker volume create $VolumeName

PowerShell

# Start-Redis-Persistent.ps1
Param(
    [string]$ContainerName = "redis-dev",
    [int]$HostPort = 6379,
    [string]$VolumeName = "redis_data",
    [string]$ImageTag = "latest"
)
docker run --name $ContainerName -v ${VolumeName}:/data -p ${HostPort}:6379 -d redis:$ImageTag redis-server --save 60 1 --loglevel warning
  • Table 4.5: Redis Parameter Reference
Parameter | PowerShell Example | Explanation & Placeholder Notes
--name | $ContainerName | A unique, friendly name for the container.
-v | -v ${VolumeName}:/data | (Required for Persistence) Maps the named volume to the container’s internal data directory, /data.
-p | -p ${HostPort}:6379 | Maps the host’s port 6379 to the container’s default Redis port 6379.
-d | -d | Runs the container in detached (background) mode.
Image | redis:$ImageTag | The official Redis image.
Command | redis-server --save 60 1 ... | (Optional) This overrides the default container command. It tells the redis-server to save a snapshot of the DB to disk every 60 seconds if at least 1 write operation has occurred.
  • How to Access & Verify (Redis):
    1. Directly (via docker exec): Access the redis-cli (Command Line Interface) inside the container.

PowerShell
# Access-Redis-CLI.ps1
Param(
    [string]$ContainerName = "redis-dev"
)
docker exec -it $ContainerName redis-cli

(Once inside, test with SET mykey "hello" and GET mykey).

    2. From Host (CLI): If redis-cli is installed locally:

PowerShell

# Connect-Redis-CLI-Host.ps1
redis-cli -h 127.0.0.1 -p 6379

    3. From Host (GUI): Use Redis Insight and connect to localhost:6379.

C. Messaging & Event Streaming

1. RabbitMQ (rabbitmq:management)

  • Service/Image: RabbitMQ (AMQP Message Broker)
  • Image: rabbitmq:3-management

The single most important detail is using the :management tag. The base rabbitmq:latest image does not include the web-based management UI.

  • PowerShell Scripts:

PowerShell
# Create-RabbitMQVolume.ps1
Param(
    [string]$VolumeName = "rabbitmq_data"
)
docker volume create $VolumeName

PowerShell

# Start-RabbitMQ.ps1
Param(
    [string]$ContainerName = "rabbitmq-dev",
    [string]$DefaultUser = "YOUR_RABBIT_USER",
    [string]$DefaultPass = "YOUR_SECRET_PASSWORD",
    [int]$HostPortAMQP = 5672,
    [int]$HostPortMgmt = 15672,
    [string]$VolumeName = "rabbitmq_data",
    [string]$ImageTag = "3-management"
)

# Note: See "Common Traps" for password persistence. Using the :management tag is crucial.
docker run --name $ContainerName -e RABBITMQ_DEFAULT_USER=$DefaultUser -e RABBITMQ_DEFAULT_PASS=$DefaultPass -v ${VolumeName}:/var/lib/rabbitmq/ -p ${HostPortAMQP}:5672 -p ${HostPortMgmt}:15672 -d rabbitmq:$ImageTag
  • Table 4.6: RabbitMQ Parameter Reference
Parameter | PowerShell Example | Explanation & Placeholder Notes
--name | $ContainerName | A unique, friendly name for the container.
-e RABBITMQ_DEFAULT_USER | -e RABBITMQ_DEFAULT_USER=... | (Optional) Creates an admin user. If not set, defaults to guest.
-e RABBITMQ_DEFAULT_PASS | -e RABBITMQ_DEFAULT_PASS=... | (Optional) Sets the password for the user. If not set, defaults to guest.
-v | -v ${VolumeName}:/var/lib/rabbitmq/ | (Required for Persistence) Maps the named volume to the container’s internal data directory.
-p 5672:5672 | -p ${HostPortAMQP}:5672 | Maps the AMQP protocol port. This is the port applications (producers/consumers) connect to.
-p 15672:15672 | -p ${HostPortMgmt}:15672 | Maps the Management UI web port.
-d | -d | Runs the container in detached (background) mode.
Image | rabbitmq:3-management | (Crucial) The image tag that includes the web UI plugin by default.
  • How to Access & Verify (RabbitMQ):
    1. Web UI (Primary): Open a browser to http://localhost:15672.
    2. Login: Use the credentials set with the environment variables (e.g., YOUR_RABBIT_USER / YOUR_SECRET_PASSWORD), or the default guest / guest if no variables were set. Note: The default guest user can only log in from localhost.
    3. Application Connection (AMQP): Your application (e.g., Node.js, Java, Python) will connect to the protocol port 5672.
      • Connection String: amqp://YOUR_RABBIT_USER:YOUR_SECRET_PASSWORD@localhost:5672
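As with MongoDB, credentials embedded in the AMQP URL must be percent-encoded if they contain reserved characters. A minimal sketch (Python standard library only; the `amqp_url` helper is hypothetical) that assembles and sanity-checks the URL:

```python
# Illustrative sketch: assembling and sanity-checking the AMQP URL.
# Credentials with reserved characters are percent-encoded before embedding.
from urllib.parse import quote, urlparse

def amqp_url(user: str, password: str, host: str = "localhost", port: int = 5672) -> str:
    return f"amqp://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}"

url = amqp_url("YOUR_RABBIT_USER", "YOUR_SECRET_PASSWORD")
parsed = urlparse(url)
print(parsed.scheme, parsed.hostname, parsed.port)  # amqp localhost 5672
```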

2. Apache Kafka (Zookeeper-less / KRaft Mode)

  • Service/Image: Apache Kafka (Distributed Event Streaming Platform)
  • Image: bitnami/kafka:latest

This setup uses Kafka’s new KRaft mode, which removes the Zookeeper dependency entirely, allowing a single Kafka node to run in a “combined” mode.

  • PowerShell Scripts:

PowerShell
# Create-KafkaVolume.ps1
Param(
    [string]$VolumeName = "kafka_data"
)
docker volume create $VolumeName

PowerShell

# Start-Kafka-KRaft.ps1
Param(
    [string]$ContainerName = "kafka-dev",
    [int]$HostPort = 9092,
    [string]$VolumeName = "kafka_data",
    [string]$ImageTag = "latest"
)

# This script runs Kafka in KRaft (Zookeeper-less) mode.
# Note: The KAFKA_CFG_CONTROLLER_QUORUM_VOTERS value must match the $ContainerName.
# Note: KAFKA_CFG_ADVERTISED_LISTENERS is key for host access. See "Common Traps" section.
docker run --name $ContainerName -p ${HostPort}:9092 -v ${VolumeName}:/bitnami/kafka -e KAFKA_CFG_NODE_ID=0 -e KAFKA_CFG_PROCESS_ROLES=controller,broker -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT -e "KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@${ContainerName}:9093" -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER -e "KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:$HostPort" -d bitnami/kafka:$ImageTag
  • Table 4.7: Kafka (KRaft) Parameter Reference
Parameter | PowerShell Example | Explanation & Placeholder Notes
--name | $ContainerName | A unique name. This name must be used in KAFKA_CFG_CONTROLLER_QUORUM_VOTERS.
-p | -p ${HostPort}:9092 | Maps the Kafka broker port 9092 to the host for client applications.
-v | -v ${VolumeName}:/bitnami/kafka | (Required for Persistence) Maps the named volume to the Bitnami image’s data directory.
-e KAFKA_CFG_NODE_ID | -e KAFKA_CFG_NODE_ID=0 | Sets a unique ID for this node. 0 is sufficient for a single-node setup.
-e KAFKA_CFG_PROCESS_ROLES | -e ...=controller,broker | (KRaft Enablement) Tells this single node to act as both controller (replaces Zookeeper) and broker.
-e KAFKA_CFG_LISTENERS | -e ...=PLAINTEXT://:9092,... | Defines the internal ports the broker listens on inside the container.
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP | -e ...=CONTROLLER:PLAINTEXT,... | Defines the security protocol (or lack thereof) for each internal listener.
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS | -e ...=0@${ContainerName}:9093 | Defines the controller. The format is {id}@{host}:{port}. The host must match the container’s --name (kafka-dev).
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES | -e ...=CONTROLLER | Tells the broker which listener is the controller.
-e KAFKA_CFG_ADVERTISED_LISTENERS | -e ...=PLAINTEXT://localhost:$HostPort | (Key for Host Access) This is the address that Kafka will advertise to external clients (i.e., applications on the host machine). This must be set to localhost:9092.
-d | -d | Runs the container in detached (background) mode.
Image | bitnami/kafka:$ImageTag | The Bitnami Kafka image, which has excellent, well-documented KRaft support.
  • How to Access & Verify (Kafka):
    1. Directly (via docker exec): The best way to verify is to create a topic using the Kafka CLI tools inside the container.

PowerShell
# Kafka-Create-Topic.ps1
Param(
    [string]$ContainerName = "kafka-dev",
    [string]$TopicName = "my-test-topic"
)
docker exec -it $ContainerName kafka-topics.sh --bootstrap-server localhost:9092 --create --topic $TopicName

Verify Topic Creation:

PowerShell

# Kafka-List-Topics.ps1
Param(
    [string]$ContainerName = "kafka-dev"
)
docker exec -it $ContainerName kafka-topics.sh --bootstrap-server localhost:9092 --list

(You should see my-test-topic printed.)

    2. From Host (Application): Any Kafka client (Java, Python, Node.js) on the host machine should set its bootstrap.servers configuration to localhost:9092.
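As a concrete sketch of that host-side configuration, the snippet below builds the connection object that kafkajs-style Node.js clients accept. The kafkaConfig helper and the clientId value are illustrative names, not anything the container requires:

```javascript
// Hypothetical Node.js client configuration for the Kafka container above.
// The object shape follows what kafkajs-style clients accept.
function kafkaConfig(hostPort = 9092) {
  return {
    clientId: 'my-host-app', // illustrative name
    // Must match KAFKA_CFG_ADVERTISED_LISTENERS: after the initial
    // connection, the broker hands clients this address back, so a
    // mismatch leaves the host unable to reach the broker.
    brokers: [`localhost:${hostPort}`],
  };
}

console.log(kafkaConfig().brokers[0]); // → localhost:9092
```

The same principle applies to any client language: the bootstrap address and the advertised listener must agree.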

D. Application Runtimes

1. Node.js (node)

  • Service/Image: Node.js Development Environment
  • Image: node:22 (or any other required version, e.g., node:18, node:20-alpine)

This use case is different. The goal is to create a live-reloading development environment. We will mount our local source code into the container, but we will cleverly isolate the node_modules directory inside a named volume to avoid polluting the host and to maximize performance.

This “volume overlay” trick is the key to a productive Node.js + Docker workflow.

  1. -v ${PWD}:/app: This bind-mounts the local source code (index.js, package.json, etc.) into the container, allowing for live edits from the host.
  2. -v my_app_node_modules:/app/node_modules: This mounts a named volume on top of the node_modules subdirectory. This “hides” the node_modules folder from the host-code mount, forcing all dependencies to be installed only inside the Docker-managed volume.
  • PowerShell Scripts:

PowerShell

# Create-NodeModulesVolume.ps1
Param(
    [string]$VolumeName = "my_app_node_modules"
)
docker volume create $VolumeName

PowerShell

# Start-Node-Dev-Shell.ps1
Param(
    [string]$ContainerName = "node-dev",
    [int]$HostPort = 3000,
    [string]$VolumeName = "my_app_node_modules",
    [string]$ImageTag = "22"
)
# Mounts the current directory (${PWD}) and isolates node_modules in a volume.
docker run -it --rm --name $ContainerName -p ${HostPort}:3000 -v ${PWD}:/app -v ${VolumeName}:/app/node_modules -w /app node:$ImageTag bash
  • Table 4.8: Node.js Dev Env Parameter Reference
| Parameter | PowerShell Example | Explanation & Placeholder Notes |
| --- | --- | --- |
| -it | -it | Interactive TTY. Required to connect the PowerShell terminal to the container’s bash shell. |
| --rm | --rm | (Recommended) Automatically removes the container when the bash shell is exited. Ideal for ephemeral dev sessions. |
| --name | $ContainerName | A unique, friendly name for the session. |
| -p | -p ${HostPort}:3000 | Maps the host port 3000 to the container’s port 3000, the common default for Node.js servers (e.g., Express). |
| -v ${PWD}:/app | -v ${PWD}:/app | (Host Code Mount) Maps the current PowerShell directory (${PWD}) to the /app directory inside the container. |
| -v ${VolumeName}... | -v ...:/app/node_modules | (The Volume Trick) Mounts the named volume my_app_node_modules over the node_modules sub-directory, isolating dependencies. |
| -w /app | -w /app | Sets the working directory inside the container to /app, so the bash shell starts there. |
| Image | node:$ImageTag | The official Node.js base image. |
| Command | bash | The command to run instead of the image’s default; drops the user into an interactive bash shell. |
  • How to Access & Verify (Node.js):
    1. The Start-Node-Dev-Shell.ps1 script is the access method. It will drop the user directly into a bash shell, inside the container, at the /app prompt.
    2. Inside the container shell: Run npm install. All packages will be downloaded and installed into the my_app_node_modules volume (they will not appear on the host).
    3. Inside the container shell: Run npm start (or npm run dev, if using nodemon). The Node.js server will start.
    4. On the host machine: Open a web browser to http://localhost:3000 to view the running application.
    5. Test Live Reload: On the host machine, open a source file (e.g., index.js) in a code editor (like VS Code) and make a change. The change is instantly reflected inside the container, and tools like nodemon will automatically restart the server.

V. Advanced Management and Best Practices

Understanding the docker run commands is the first step. Mastering the management of data and networks is what makes this workflow robust and sustainable.

A. Mastering Data: Volumes vs. Bind Mounts

This guide uses two different methods for data persistence, each for a specific purpose.

  • Named Volumes (e.g., postgres_data, my_app_node_modules) A named volume is a block of storage fully managed by the Docker engine, abstracted from the host filesystem.
    • Pros: This is the preferred method for all “service data.” It is platform-agnostic, its contents are protected from accidental host-user modification, and it is the most performant way for a container to write data.
    • Use Case: This is the strongly recommended method for all database data (PostgreSQL, MySQL, Mongo), message-broker data (RabbitMQ, Kafka), and any other data that the container owns and manages (like the node_modules trick).
  • Bind Mounts (e.g., ${PWD}:/app) A bind mount directly maps a specific directory or file from the host machine into the container.
    • Pros: Allows for real-time, bi-directional file synchronization. This is what enables the “live-reload” development workflow.
    • Cons: It is slower than a named volume due to I/O overhead. It is tied to a specific host path and can introduce file permission issues between the (Linux) container and the (Windows) host.
    • Use Case: This is the strongly recommended method for application source code that is being actively developed on the host machine.

B. Mastering Networking: The User-Defined Bridge

As demonstrated in the PostgreSQL/pgAdmin example, relying on localhost and -p port-publishing creates a brittle system. The robust, production-ready pattern is to use a user-defined bridge network.

  • The Problem (Recap): The “localhost” trap. localhost inside a container refers to that container and only that container.
  • The Solution:
    1. Create a dedicated network for an application stack (once):

PowerShell

# Create-Docker-Network.ps1
Param(
    [string]$NetworkName = "my-application-net"
)
docker network create $NetworkName

    2. Add the --network my-application-net flag to every container in that stack (e.g., the Node.js app, the Postgres DB, and the Redis cache).
  • The Benefit: All containers on this network can now resolve each other by their container name. The Node.js application’s database connection string would no longer use localhost. It would use the name of the database container:
    • postgres://my_app_user:USER_PASSWORD@postgres-dev:5432/my_app_db

This service-discovery-by-name pattern is how modern, container-orchestrated systems (like Docker Compose and Kubernetes) function by default.
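The two addressing modes can be captured in a small helper — a sketch only: the postgresUrl function and its onDockerNetwork switch are invented for illustration, while the container and credential names mirror the example above:

```javascript
// Hypothetical helper: build a Postgres connection string whose host
// depends on where the client is running.
function postgresUrl({ user, password, db, onDockerNetwork }) {
  // On the user-defined bridge, resolve the database by container name;
  // from the host, go through the published port on localhost instead.
  const host = onDockerNetwork ? 'postgres-dev' : 'localhost';
  return `postgres://${user}:${password}@${host}:5432/${db}`;
}

console.log(postgresUrl({
  user: 'my_app_user',
  password: 'USER_PASSWORD',
  db: 'my_app_db',
  onDockerNetwork: true,
}));
// → postgres://my_app_user:USER_PASSWORD@postgres-dev:5432/my_app_db
```

In practice this switch is usually driven by an environment variable, which is exactly what Docker Compose and Kubernetes configuration conventions encourage.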

C. The “Scorched Earth” Cleanup (And the Persistent Data Problem)

To fully reset a local development environment, a set of cleanup commands is required.

PowerShell

# Cleanup-Stop-All-Containers.ps1
docker stop $(docker ps -q)

PowerShell

# Cleanup-Remove-All-Containers.ps1
docker rm $(docker ps -a -q)

PowerShell

# Cleanup-Prune-Networks.ps1
docker network prune

PowerShell

# Cleanup-Prune-Volumes.ps1
# Note: recent Docker releases (23.0+) prune only anonymous volumes by
# default; --all is required to also remove unused named volumes.
docker volume prune --all

A very common point of friction for developers is the “Password Won’t Change” problem. A user may stop and remove a Postgres container (docker rm pc1_postgres), re-run the exact same docker run command with a new -e POSTGRES_PASSWORD=new_pass, and find that they can still log in only with the old password (admin).

This is not a bug; it is a feature of persistence. The container’s entrypoint script is designed to run its initialization logic (like setting the superuser password) only if the data directory is empty. Because the pc1_postgres_data volume persisted from the first run (as intended), the script sees the existing database files and skips the initialization step. The -e flag is simply (and correctly) ignored on all subsequent runs.

The only way to force a container to re-initialize (e.g., to reset the password or run an init script) is to also delete its persistent volume:

PowerShell

# Reset-Postgres-Demo.ps1
Param(
    [string]$ContainerName = "pc1_postgres",
    [string]$VolumeName = "pc1_postgres_data"
)
docker stop $ContainerName
docker rm $ContainerName
docker volume rm $VolumeName

After these commands, the next docker run command will find an empty, new volume and will execute its full initialization logic, correctly applying the new password. Understanding this relationship between the container’s ephemeral lifecycle and the volume’s persistent lifecycle is the key to mastering local service management with Docker.