Embed Env Vars in Files on an Nginx Docker Image with envsubst

2024-08-02

In this post, we're going to focus on learning how to configure your Docker image to support dynamic environment variables for your Nginx web server. This allows for more flexible front-end deployment.

But before jumping into it, let's have a brief introduction to Nginx itself, when you would and wouldn't want to use it, and why Nginx is a good fit for front-end delivery.

2025/07/15: The official Nginx Docker image is continuously updated and now makes it easier to embed environment variables, so I've updated this article accordingly.

What is Nginx?

Nginx is a popular open-source web server. It can also function as a reverse proxy, load balancer, mail proxy, and HTTP cache.

When should you use it?

Nginx is a versatile web server that excels in several scenarios:

  1. High-traffic websites: It efficiently handles numerous concurrent connections, making it ideal for busy sites.
  2. Serving static content: It quickly delivers static files like images, CSS, and JavaScript.
  3. Load balancing: It effectively distributes traffic across multiple servers, improving reliability and performance.
  4. Reverse proxy: It can sit in front of application servers, adding features like caching and SSL termination.
  5. API gateway: It works well for routing requests to different microservices in a distributed architecture.
  6. Performance optimization: With its caching capabilities, Nginx can significantly boost website speed.
  7. Enhanced security: Features like rate limiting help protect against certain types of attacks.

When should you avoid using it?

While Nginx is a powerful and versatile web server, there are scenarios where it might not be the best choice:

  1. Simple, low-traffic websites: For basic websites with low traffic, Nginx's advanced features may be overkill. A simpler server like Apache might be easier to set up and manage.
  2. Windows-based environments: Although Nginx can run on Windows, it's primarily designed for Unix-like systems. In Windows-centric environments, IIS (Internet Information Services) might be a more natural fit.
  3. When extensive .htaccess support is needed: Unlike Apache, Nginx doesn't support .htaccess files. If your application heavily relies on .htaccess for configuration, switching to Nginx could require significant changes.
  4. Applications requiring deep integration with the web server: Some applications are built to work closely with specific web servers. If your application is tightly coupled with another web server's architecture, migrating to Nginx could be challenging.
  5. When real-time communication is a primary requirement: While Nginx can handle WebSockets, for applications primarily focused on real-time, bidirectional communication, specialized solutions like Node.js with Socket.IO might be more appropriate.
  6. Limited in-house expertise: If your team is more familiar with other web servers, the learning curve for Nginx might not be worth it for simpler projects.
  7. When extensive GUI-based administration is required: Nginx lacks a built-in GUI for administration. For environments where non-technical staff need to manage the web server, solutions with comprehensive graphical interfaces might be preferable.

Why Nginx should be used for front-end delivery

Until now, Re:Earth has been deploying the front end by placing the source code in a Google Cloud Storage (GCS) bucket and delivering it via a CDN.

However, this method has led to the following issues:

  • Since the front end is deployed using gsutil rsync, an interrupted deployment can leave the files in the bucket in an incomplete state. Moreover, users who access the site during a deployment may be served files before everything has been correctly deployed.
  • It is difficult to quickly roll back when bugs are discovered in the front end after deploying a new version.
  • If you try to build a mechanism on GCS that allows for easy rollback, the CI/CD workflow and scripts become complicated.

Therefore, we have switched to delivering front-end files using Nginx and deploying its Docker image on Cloud Run.

This method provides the following benefits:

  • The CI/CD workflow becomes simpler.
  • With Cloud Run's features, rolling back the front end and changing traffic can be done easily. There will be no incomplete states, and revisions can be switched without downtime.
  • By introducing Cloud Deploy, it becomes possible to achieve canary releases, including the front end.

This approach has the drawback of not being able to leverage the scalability benefits of GCS for front-end delivery, but we believe the advantages outweigh this.

However, to realize this, the Nginx image needs to be able to customize certain settings through environment variables. Without this, a Docker build would be necessary every time a single setting needs to change.

Let's build a Simple Nginx Docker Image

To create a basic Nginx Docker image, we'll use the official nginx image as our starting point. Here's how to set it up:

  1. Create a new directory for your project:
mkdir nginx-docker
cd nginx-docker
  2. Create a Dockerfile:
touch Dockerfile
  3. Open the Dockerfile in your preferred text editor and add the following content:
FROM nginx:alpine

COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY html /usr/share/nginx/html

The Dockerfile does the following:

  • Uses the official Nginx Alpine image as the base
  • Copies a custom nginx.conf file into the container
  • Copies your HTML files into the default Nginx web root
  4. Create a basic nginx.conf file:
touch nginx.conf

Add this simple configuration:

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}

When placing configuration files in /etc/nginx/conf.d/*.conf instead of /etc/nginx/nginx.conf, the http block is not needed, because these files are included inside the http block of the default nginx.conf.
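If you're curious, you can confirm that include directive directly from the stock image (a quick check; the exact formatting may vary between image versions):

docker run --rm nginx:alpine grep 'conf.d' /etc/nginx/nginx.conf
# should print something like: include /etc/nginx/conf.d/*.conf;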

  5. Create an HTML directory and add an index.html file:
mkdir html
echo "<h1>Hello from Nginx Docker!</h1>" > html/index.html
  6. Build your Docker image:
docker build -t my-nginx-image .
  7. Run your container:
docker run -d -p 8080:80 my-nginx-image
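You can then verify that the container is serving your page (assuming port 8080 is free on your machine):

curl http://localhost:8080/
# → <h1>Hello from Nginx Docker!</h1>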

How to Configure Environment Variables for Your Docker Image

The Nginx image has a feature that automatically processes files matching /etc/nginx/templates/*.template when the container starts and places the results in /etc/nginx/conf.d/. Using this, you can dynamically embed environment variables into Nginx configuration files when the container launches.
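Before baking this into our image, here is a tiny throwaway demonstration of that mechanism; the file and container names below are just examples:

# create a one-line template and mount it into the templates directory
echo 'server { listen ${PORT}; }' > demo.conf.template
docker run -d --name nginx-demo -e PORT=8081 \
  -v "$PWD/demo.conf.template:/etc/nginx/templates/demo.conf.template:ro" \
  nginx:alpine
sleep 1   # give the entrypoint a moment to render the template
# the entrypoint has placed the rendered file in conf.d with PORT substituted
docker exec nginx-demo cat /etc/nginx/conf.d/demo.conf
# → server { listen 8081; }
docker rm -f nginx-demo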

  1. Create a template for your Nginx configuration:

    Rename your nginx.conf to nginx.conf.template and modify it to use environment variables. Since the processed file will still end up in /etc/nginx/conf.d/, it only needs the server block:

    server {
        listen ${PORT};
        server_name ${SERVER_NAME};

        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
    }
    
  2. Update your Dockerfile:

    Modify your Dockerfile to copy the template into the templates directory. No custom startup script is needed, because the image's entrypoint processes these templates automatically (more on this below):

    FROM nginx:alpine

    COPY nginx.conf.template /etc/nginx/templates/default.conf.template
    COPY ./html /usr/share/nginx/html
    
  3. Build your updated Docker Image:

    docker build -t my-nginx-env-image .
    
  4. Run your container with environment variables:

    docker run -d -p 8080:80 -e PORT=80 -e SERVER_NAME=example.com my-nginx-env-image
    

Now, your Nginx configuration will use the values provided in the environment variables. This setup allows for greater flexibility, as you can change the configuration without rebuilding the image.
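To confirm that the substitution actually happened, you can peek at the rendered configuration inside the running container (this assumes it is the most recently started container on your machine):

docker exec "$(docker ps -lq)" cat /etc/nginx/conf.d/default.conf
# listen 80; and server_name example.com; should appear where ${PORT} and ${SERVER_NAME} were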

You can adjust other Nginx settings using this method as well: just add more variables to your template and pass them when running the container. Furthermore, envsubst can generate any file this way, not just nginx.conf.

This approach is particularly useful in containerized environments where you want to keep your images as generic and reusable as possible, with specific configurations provided at runtime.

Why Environment Variables Are Automatically Embedded in Configuration Files

You might be wondering why environment variables are embedded in configuration files with just this setup.

In fact, the Nginx image has /docker-entrypoint.sh set as its ENTRYPOINT. This means that this script is executed when the container starts up.

This script includes functionality to execute /docker-entrypoint.d/*.sh in sequence, and then start Nginx (reference).

Additionally, the container includes the following scripts in /docker-entrypoint.d by default:

  • 10-listen-on-ipv6-by-default.sh: Script to enable IPv6 support (reference)
  • 15-local-resolvers.envsh: Script to set local DNS resolvers as environment variables (reference)
  • 20-envsubst-on-templates.sh: Script to replace environment variables in template files (reference)
  • 30-tune-worker-processes.sh: Script to automatically adjust the number of Nginx worker processes (reference)
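You can list these scripts yourself straight from the stock image; passing a different command skips Nginx startup and simply runs that command instead:

docker run --rm nginx:alpine ls -l /docker-entrypoint.d/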

Among these, when 20-envsubst-on-templates.sh is executed, the envsubst utility is called internally, and /etc/nginx/templates/*.template files are placed in /etc/nginx/conf.d/* with environment variables embedded.

Therefore, if you simply want to embed environment variables in Nginx configuration files, you don't need to create your own shell scripts and place them in the image. You just need to place files in /etc/nginx/templates/*.template.

However, since /etc/nginx/nginx.conf is already in use as the image's main configuration file, it's safer not to create /etc/nginx/templates/nginx.conf.template, to prevent unintended overwrites and confusion between the two files.

Running Custom Scripts Before Startup

When starting Nginx, you might want to execute various processes beyond just embedding environment variables into configuration files. This is also straightforward.

For example, let's consider embedding environment variables into a prepared JSON file template and placing it in the /usr/share/nginx/html directory.

First, create a config.json.template file as follows:

{ "apiUrl": "${API_URL}" }

Next, create a 04-envsubst-config.sh shell script (don't forget to add execution permissions). The content of the shell script is as follows:

#!/bin/sh
envsubst < /opt/config.json.template > /usr/share/nginx/html/config.json

The substitution itself is done by the envsubst utility, which is already included in the Nginx image.
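To get a feel for what envsubst does, you can also run it locally against the template (assuming GNU gettext's envsubst is installed on your machine):

API_URL=https://api.example.com envsubst < config.json.template
# → { "apiUrl": "https://api.example.com" }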

Next, copy this shell script to the /docker-entrypoint.d directory in your Dockerfile:

FROM nginx:alpine

COPY nginx.conf.template /etc/nginx/templates/default.conf.template
COPY config.json.template /opt/config.json.template
COPY 04-envsubst-config.sh /docker-entrypoint.d/04-envsubst-config.sh
COPY html /usr/share/nginx/html

This completes the setup. Build the Docker image, start the container with the -e API_URL=https://example.com option (along with -p 8080:80 and the PORT and SERVER_NAME variables from before), and access http://localhost:8080/config.json to view JSON like this:

{ "api_url": "https://example.com" }

This is useful when the frontend needs to read configurations via environment variables.

You might have already noticed why we used the number 04- at the beginning of the shell script filename. Since /docker-entrypoint.sh executes the scripts in /docker-entrypoint.d in dictionary order, the 04- prefix makes our script run before the default scripts, which is fine here because it doesn't depend on any of them. If you ever need your script to run after the defaults, pick a higher prefix such as 40-.

In this example, if the API_URL environment variable is not set, no error occurs; envsubst simply substitutes an empty string, and config.json will silently contain an empty apiUrl. When using this in production environments, you might want to implement validation and error handling in the shell script.
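A minimal sketch of such validation, assuming you want the container to fail fast instead of serving a broken config.json:

#!/bin/sh
set -eu
# abort with a clear error if API_URL is unset or empty
: "${API_URL:?API_URL must be set}"
envsubst < /opt/config.json.template > /usr/share/nginx/html/config.json

Because the entrypoint runs these scripts before starting Nginx, a non-zero exit here should prevent the container from coming up at all, which is usually easier to notice than an empty apiUrl.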

Conclusion

In this post, we've explored how to configure a Docker image for Nginx with support for dynamic environment variables. Here's a summary of what we've covered:

  1. We started with an overview of Nginx, discussing its strengths and potential use cases.
  2. We built a simple Nginx Docker image, demonstrating how to create a basic setup for serving static content.
  3. We then enhanced our Docker image to support dynamic configuration using environment variables.
  4. Finally, we learned how to run custom scripts before startup.

By implementing this setup, you're well-equipped to deploy Nginx in various scenarios, from development environments to production systems, with ease and efficiency.

Remember, while this method is powerful, it's important to manage your environment variables securely, especially in production environments. Consider using Docker secrets or other secure methods for handling sensitive information.

The combination of Nginx's performance and Docker's flexibility creates a robust foundation for many web serving needs. Keep experimenting and adapting these techniques to best suit your specific use cases.

Happy Dockerization!

Eukarya is hiring for various positions! We are looking forward to your application from everyone who can contribute to OSS!

Eukarya Careers

Eukarya is developing and operating a WebGIS SaaS called Re:Earth. We aim to complete all GIS-related tasks including 3D (such as publishing map applications, data management, and data conversion) on the web. Most of the source code is published on GitHub as OSS.

Eukarya Official Page / ➔ Medium / ➔ GitHub