Embed Env Vars in Files on an Nginx Docker Image with envsubst

2024-08-02


In this post, we'll focus on how to configure your Docker image to support dynamic environment variables for your Nginx web server. This allows for more flexible front-end deployment.

But before jumping in, let's start with a short introduction to Nginx itself, when you would and wouldn't want to use it, and why it is a good fit for front-end delivery.

What is Nginx?

Nginx is popular open-source web server software. It can also function as a reverse proxy, load balancer, mail proxy, and HTTP cache.

When should you use it?

Nginx is a versatile web server that excels in several scenarios:

  1. High-traffic websites: Nginx efficiently handles numerous concurrent connections, making it ideal for busy sites.
  2. Serving static content: It quickly delivers static files like images, CSS, and JavaScript.
  3. Load balancing: Nginx effectively distributes traffic across multiple servers, improving reliability and performance.
  4. Reverse proxy: It can sit in front of application servers, adding features like caching and SSL termination.
  5. API gateway: Nginx works well for routing requests to different microservices in a distributed architecture.
  6. Performance optimization: With its caching capabilities, Nginx can significantly boost website speed.
  7. Enhanced security: Features like rate limiting help protect against certain types of attacks.

When should you avoid using it?

While Nginx is a powerful and versatile web server, there are scenarios where it might not be the best choice:

  1. Simple, low-traffic websites: For basic websites with low traffic, Nginx's advanced features may be overkill. A simpler server like Apache might be easier to set up and manage.
  2. Windows-based environments: Although Nginx can run on Windows, it's primarily designed for Unix-like systems. In Windows-centric environments, IIS (Internet Information Services) might be a more natural fit.
  3. When extensive .htaccess support is needed: Unlike Apache, Nginx doesn't support .htaccess files. If your application relies heavily on .htaccess for configuration, switching to Nginx could require significant changes.
  4. Applications requiring deep integration with the web server: Some applications are built to work closely with specific web servers. If your application is tightly coupled with another web server's architecture, migrating to Nginx could be challenging.
  5. When real-time communication is a primary requirement: While Nginx can handle WebSockets, for applications primarily focused on real-time, bidirectional communication, specialized solutions like Node.js with Socket.IO might be more appropriate.
  6. Limited in-house expertise: If your team is more familiar with other web servers, the learning curve for Nginx might not be worth it for simpler projects.
  7. When extensive GUI-based administration is required: Nginx lacks a built-in GUI for administration. For environments where non-technical staff need to manage the web server, solutions with comprehensive graphical interfaces might be preferable.

Why Nginx should be used for front-end delivery

Until now, Re:Earth has been deploying the front end by placing the source code in a Google Cloud Storage (GCS) bucket and delivering it via a CDN.

However, this method has led to the following issues:

  • Since the front end is deployed using gsutil rsync, the files in the bucket may be left in an incomplete state if the deployment is interrupted for any reason. Moreover, if users access the site during a deployment, they may be served files before all of them have been correctly deployed.
  • It is difficult to quickly roll back when bugs are discovered in the front end after deploying a new version.
  • If you try to build a mechanism on GCS that allows for easy rollback, the CI/CD workflow and scripts become complicated.

Therefore, we plan to switch to delivering front-end files using Nginx and deploying its Docker image on Cloud Run.

This method provides the following benefits:

  • The CI/CD workflow becomes simpler.
  • With Cloud Run's features, rolling back the front end and shifting traffic can be done easily (a quick sketch follows this list). There will be no incomplete states, and revisions can be switched without downtime.
  • By introducing Cloud Deploy, it becomes possible to achieve canary releases, including the front end.
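
As a rough illustration of how simple a rollback becomes with this setup (the service and revision names below are hypothetical):

# Send 100% of traffic back to a known-good revision
gcloud run services update-traffic my-frontend --to-revisions=my-frontend-00012-abc=100

# Or shift only part of the traffic for a canary-style rollout
gcloud run services update-traffic my-frontend --to-revisions=my-frontend-00013-def=10,my-frontend-00012-abc=90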

This approach has the drawback of not being able to leverage the scalability benefits of GCS for front-end delivery, but we believe the advantages outweigh this.

However, to realize this, the Nginx image needs to have the capability to customize certain settings through environment variables. Without this, a Docker build would be necessary every time a single setting needs to change.

Let's Build a Simple Nginx Docker Image

To create a basic Nginx Docker image, we'll use the official nginx image as our starting point. Here's how to set it up:

  1. Create a new directory for your project:
mkdir nginx-docker
cd nginx-docker
  2. Create a Dockerfile:
touch Dockerfile
  3. Open the Dockerfile in your preferred text editor and add the following content:
FROM nginx:alpine

COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./html /usr/share/nginx/html

The Dockerfile does the following:

  • Uses the official nginx:alpine image as the base
  • Copies a custom nginx.conf file into the container
  • Copies your HTML files into the default Nginx web root
  4. Create a basic nginx.conf file:
touch nginx.conf

Add this simple configuration:

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
    }
}
  5. Create an HTML directory and add an index.html file:
mkdir html
echo "<h1>Hello from Nginx Docker!</h1>" > html/index.html
  6. Build your Docker image:
docker build -t my-nginx-image .
  7. Run your container:
docker run -d -p 8080:80 my-nginx-image
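
To confirm it's serving correctly (assuming port 8080 is free on your machine):

curl http://localhost:8080
# <h1>Hello from Nginx Docker!</h1>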

How to Configure Env Vars for Your Docker Image

To use environment variables in your Nginx Docker image, we'll employ the envsubst utility. This approach allows for dynamic configuration based on environment variables passed to the container at runtime.
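
If you haven't used envsubst before: it reads text on standard input and writes it back out with every ${VAR} reference replaced by the value of the corresponding environment variable. A quick illustration (the PORT value here is arbitrary):

    echo 'listen ${PORT};' | PORT=8080 envsubst
    # prints: listen 8080;

Here's how to set it up: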

  1. Create a template for your Nginx configuration:

    Rename your nginx.conf to nginx.conf.template and modify it to use environment variables:

    events {
        worker_connections 1024;
    }
    
    http {
        server {
            listen ${PORT};
            server_name ${SERVER_NAME};
    
            location / {
                root /usr/share/nginx/html;
                index index.html;
            }
        }
    }
    
  2. Create a startup script:

    Create a file named docker-entrypoint.sh:

    #!/bin/sh
    envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
    exec nginx -g 'daemon off;'
    

    This script replaces the environment variables in the template and starts Nginx. (A note after these steps covers a caveat about which variables envsubst replaces.)

  3. Update your Dockerfile:

    Modify your Dockerfile to use the template and startup script:

    FROM nginx:alpine
    
    COPY nginx.conf.template /etc/nginx/nginx.conf.template
    COPY docker-entrypoint.sh /
    COPY ./html /usr/share/nginx/html
    
    RUN chmod +x /docker-entrypoint.sh
    
    CMD ["/docker-entrypoint.sh"]
    
  4. Build your updated Docker image:

    docker build -t my-nginx-env-image .
    
  5. Run your container with environment variables:

    docker run -d -p 8080:80 -e PORT=80 -e SERVER_NAME=example.com my-nginx-env-image
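
One caveat with this approach: envsubst replaces every ${VAR}-style reference it finds, and Nginx configs often use their own variables such as $host or $uri. If your template grows to include any of those, limit substitution to an explicit list of variables. A sketch of the adjusted entrypoint (the variable names match the template above):

    #!/bin/sh
    # Only substitute PORT and SERVER_NAME; leave Nginx's own $variables untouched
    envsubst '${PORT} ${SERVER_NAME}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
    exec nginx -g 'daemon off;'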
    

Now, your Nginx configuration will use the values provided in the environment variables. This setup allows for greater flexibility, as you can change the configuration without rebuilding the image.
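
To confirm the substitution worked (assuming the container from step 5 is running):

    curl http://localhost:8080
    # <h1>Hello from Nginx Docker!</h1>

    # Inspect the rendered config (replace CONTAINER_ID with the ID shown by `docker ps`)
    docker exec CONTAINER_ID cat /etc/nginx/nginx.conf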

You can adjust other Nginx settings using this method as well. Just add more variables to your template and pass them when running the container. Furthermore, envsubst can generate any file this way, not just nginx.conf.
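
For instance, a common pattern for single-page apps is generating a small runtime config file that the front end loads before its bundle. Everything below (the file name, variable, and extra entrypoint line) is purely illustrative, a sketch of the idea rather than the actual Re:Earth setup. A config.js.template could look like this:

    window.API_URL = "${API_URL}";

Then add one more line to docker-entrypoint.sh before Nginx starts:

    envsubst '${API_URL}' < /usr/share/nginx/html/config.js.template > /usr/share/nginx/html/config.js

and pass the value at runtime with -e API_URL=https://api.example.com.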

This approach is particularly useful in containerized environments where you want to keep your images as generic and reusable as possible, with specific configurations provided at runtime.

Conclusion

In this post, we've explored how to configure a Docker image for Nginx with support for dynamic environment variables. Here's a summary of what we've covered:

  1. We started with an overview of Nginx, discussing its strengths and potential use cases.
  2. We built a simple Nginx Docker image, demonstrating how to create a basic setup for serving static content.
  3. We then enhanced our Docker image to support dynamic configuration using environment variables, employing the envsubst utility.

By implementing this setup, you're well-equipped to deploy Nginx in various scenarios, from development environments to production systems, with ease and efficiency.

Remember, while this method is powerful, it's important to manage your environment variables securely, especially in production environments. Consider using Docker secrets or other secure methods for handling sensitive information.
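
For non-sensitive values, one practical option is keeping them in an env file that stays out of version control rather than typing them on the command line (the file name below is just an example). Create .env.production with:

    PORT=80
    SERVER_NAME=example.com

and pass it at startup:

    docker run -d -p 8080:80 --env-file .env.production my-nginx-env-image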

The combination of Nginx's performance and Docker's flexibility creates a robust foundation for many web serving needs. Keep experimenting and adapting these techniques to best suit your specific use cases.

Happy Dockerization!



Eukarya is hiring for various positions! We look forward to applications from everyone who can contribute to OSS!

Eukarya Careers


Eukarya is developing and operating a WebGIS SaaS called Re:Earth. We aim to complete all GIS-related tasks including 3D (such as publishing map applications, data management, and data conversion) on the web. Most of the source code is published on GitHub as OSS.

Eukarya Official Page / ➔ Medium / ➔ GitHub