Like many of you, the NGINX Unit team has been hunkered down at home during the COVID‑19 pandemic. Nonetheless, we’ve been able to maintain our steady release cadence, introducing versions 1.15.0 and 1.16.0 in the past couple months. Let’s take a quick look at the new features we’ve added.
Two of the features new in NGINX Unit 1.16.0 are familiar to fans of NGINX Open Source and NGINX Plus: the `fallback` routing option is similar to the NGINX `try_files` directive, and the `upstreams` object introduces round-robin load balancing of requests across a dedicated group of backend servers.
The `fallback` Routing Option
The first new feature enables you to define an alternative routing action for cases when a static file can't be served for some reason. You can easily imagine ways to extend this logic beyond mere file system misses, but we decided to address those first. Therefore, our initial implementation of the `fallback` action pairs with the `share` action introduced in NGINX Unit 1.11.0. Let's see how it's done.
The new `fallback` option defines what NGINX Unit does when it can't serve a requested file from the `share` directory for any reason. Consider the following `action` object:
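A minimal sketch of such an object; the directory matches the discussion below, while the application name `blog` is a placeholder of our choosing:

```json
{
    "share": "/data/www/",
    "fallback": {
        "pass": "applications/blog"
    }
}
```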
When this action is performed, NGINX Unit attempts to serve the request with a file from the /data/www/ directory. However, if (and only if) the requested file is unavailable (not found, insufficient rights, you name it), NGINX Unit performs the `fallback` action. Here, it's a `pass` action that forwards the request to a PHP blog application, but you can configure a `proxy` or even another `share` as well (more on the latter below).
Effectively, this means that NGINX Unit serves existing static files and simultaneously forwards all other requests to the application, thus reducing the need for extra routing steps. Less configuration reduces the chance of a mistake.
Moreover, nothing prevents nesting multiple `share` actions to create an elaborate request handler:
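One possible shape for such a handler, again using our placeholder `blog` application: a second `share` is tried before the request finally reaches the app.

```json
{
    "share": "/data/www/",
    "fallback": {
        "share": "/data/cache/",
        "fallback": {
            "pass": "applications/blog"
        }
    }
}
```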
This snippet builds upon the previous one by adding yet another file location, /data/cache/, to the `fallback` chain before the request is passed to the same application as in the previous snippet. Keep in mind that in this initial implementation `fallback` can only accompany a `share`; it can't be specified as an alternative to `pass` or `proxy`.
The logic of this config option may seem simple, but it enables NGINX Unit to single‑handedly run many powerful applications that previously required an additional software layer. For example, we’ve already updated our how‑tos for WordPress, Bugzilla, and NextCloud to make use of the new option, significantly reducing configuration overhead.
Round-Robin Load Balancing with Upstreams
The other major feature we're introducing is the `upstreams` object. It resides within the config section as a peer of the `listeners`, `applications`, `routes`, and `settings` objects. In case you're not familiar with NGINX, an upstream is an abstraction that groups several servers into a single logical entity to simplify management and monitoring. Typically, you can distribute workloads, assign different roles, and fine-tune properties of individual servers within an upstream, yet it looks and acts like a single entity when viewed from outside.
In NGINX Unit, upstreams are configured as follows:
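A sketch of a listener passing requests to a round-robin upstream; the upstream name `rr-lb` comes from the discussion below, while the addresses and ports are placeholders:

```json
{
    "listeners": {
        "*:80": {
            "pass": "upstreams/rr-lb"
        }
    },

    "upstreams": {
        "rr-lb": {
            "servers": {
                "192.168.0.100:8080": {},
                "192.168.0.101:8080": {
                    "weight": 2
                }
            }
        }
    }
}
```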
Like an application or a route, an upstream can be the target of a `pass` action in both the `listeners` and `routes` objects. The definition of each upstream includes its name (`rr-lb` in the example) and a mandatory `servers` object, which contains a configuration object for each server in the named upstream, specifying the server's IP address and port and, optionally, other characteristics. In the initial implementation, the only supported option is the integer-valued `weight` of the server, used for load balancing.
NGINX Unit automatically distributes requests among the servers in the upstream in round-robin fashion according to their weights. In the example above, the second server receives approximately twice as many requests as the first (from which you can deduce that the default weight is 1). Again, round robin is just one of many possible load-balancing methods, and others will join it in future releases.
With the introduction of upstreams, NGINX Unit greatly enhances its range of functionality: originally a solid standalone building block for managing your apps, it is steadily gaining momentum as a versatile, feature‑rich component of your overall app‑delivery infrastructure. You can use it as an application endpoint, a reverse proxy, a load balancer, a static cache server, or in any other way you may come up with.
Other Developments
NGINX Unit now has a shiny new roadmap where you can find out whether your favorite feature is going to be implemented any time soon, and rate and comment on our in‑house initiatives. Feel free to open new issues in our repo on GitHub and share your ideas for improvement. Perhaps one day they’ll turn up on our roadmap as well!
In case you are wondering what happened to our containerization initiative: it's alive and well, thank you very much. In NGINX Unit 1.14.0, we introduced the ability to change the `user` and `group` settings for isolated applications when the NGINX Unit daemon runs as an unprivileged user (remember that the recommended way is to run it as `root`, though). Of course, this isn't the end of it: there's much more coming in the next few months.
What’s Next
Speaking of our plans: our extended team is working on several under-the-hood enhancements, such as IPC and memory buffer management, to make NGINX Unit a tad more robust. On the user-facing side of things, our current endeavors include enhancements in load balancing, the ability to return custom HTTP response status codes during routing, and the `rootfs` option, which confines running apps within a designated file system directory. All in all, NGINX Unit remains a busy construction site, so don't forget to bring a hard hat!