Maintaining An OSTree Remote With NGINX

Previously I showed you how to build and maintain a Fedora IoT Remix, how to use RPM-OSTree-Engine to ease the process, and how to automate our builds with GitLab CI/CD. This time I want to go into detail on how to properly distribute the RPM-OSTree repository to potential clients like my many many Raspberry Pis (like three or so). Or, of course, my company’s clients.

I’m using NGINX as a web server to serve the repository, with a configuration targeted at distributing files and minor adjustments for the RPM-OSTree use-case. The base configuration file is taken straight from the NGINX docs on using it as a file server. Check out the whole configuration file in my personal infrastructure repository.
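Stripped down to its essence, the setup is a server block serving the repository directory as static files. The following is a minimal sketch, not my actual config: the server name, certificate paths, and repository location are placeholders.

```nginx
server {
    listen              443 ssl;
    server_name         ostree.example.com;           # placeholder

    ssl_certificate     /etc/nginx/tls/fullchain.pem; # placeholder paths
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    root /srv/ostree;            # assumes the repository lives in /srv/ostree/repo

    location /repo/ {
        autoindex off;           # clients fetch objects by exact path, no listing needed
    }
}
```

Clients would then point a remote at `https://ostree.example.com/repo`.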

Besides many optimizations and security precautions, the settings interesting for RPM-OSTree are centred around caching, file transfer and timeouts.

The timeout settings control how long (parts of) transactions can last before they are aborted. Especially on slower network connections these have to be raised to allow clients to take their time downloading updates.

http {
	## Timeouts: do not keep connections open longer than necessary, to reduce
	# resource usage and deny Slowloris-type attacks.
	client_body_timeout      20s;  # maximum pause between packets while the client sends nginx data
	client_header_timeout    20s;  # maximum time the client has to send the entire header to nginx
	keepalive_timeout        160s; # time a single keep-alive client connection will stay open
	# Needs to be more than 60s according to tests in low-bandwidth scenarios:
	send_timeout             120s; # maximum pause between packets while nginx sends the client data

Then the file caching options are interesting. Pay special attention to open_file_cache_valid <time>, since this effectively controls how often NGINX refreshes your repository state. Say you’ve pushed a commit to your repository without a signature; a client downloads your repo’s metadata and complains about the unsigned commit. You then apply the signature on the server, but it will still take up to <time> for NGINX to actually serve the new metadata with the signature applied. Usual caching values for file servers of, e.g., 1 or 2 hours don’t work well with this kind of source. The NGINX docs have more to say about caching, of course. There is also a useful guide to caching with NGINX on their blog.

http {
	open_file_cache           max=1000 inactive=5m; # cache metadata for up to 1000 files, drop entries unused for 5m
	open_file_cache_errors    on;                   # also cache file lookup errors (e.g. 404s)
	open_file_cache_min_uses  1;                    # cache entries after their first access
	open_file_cache_valid     2m;                   # revalidate cached entries every 2 minutes
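The transfer settings mentioned above are fairly standard static-file-serving directives. A sketch, not a verbatim copy of my config:

```nginx
http {
	sendfile     on;  # let the kernel copy file data to the socket directly
	tcp_nopush   on;  # fill packets before sending to reduce overhead
	gzip         off; # OSTree objects are typically stored compressed already; recompressing wastes CPU
```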

The tweaks are inspired by a bachelor’s thesis of a former colleague of mine. It’s a recommended read that goes more in-depth on some aspects of the whole “Fedora IoT as a firmware” question, as well as on the distribution of RPM-OSTree repositories and their comparison to similar technologies in terms of bandwidth usage, update methods, etc.

Although I’m using this config for both my hobby projects and the production deployment at work, that’s not to say there is nothing left to optimize. I’m pretty sure there is. If you find something interesting, don’t hesitate to file a Merge Request or reach out on Mastodon ;)

Deploying OSTree and NGINX with Ansible

I’ve published a dedicated role for managing an RPM-OSTree repository in my central infrastructure git repository. The role sets up an NGINX container with a suitable configuration and takes care of providing an SSH user for hooking the repository up to the CI/CD done by RPM-OSTree-Engine.

The role relies on the public dev-sec.ssh-hardening role / collection available on Ansible Galaxy. It also depends on the traefik role from the same repository, especially when using authentication methods like BasicAuth or mTLS.
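Wiring those dependencies together looks roughly like this. A minimal playbook sketch, where the host group and the repository role name (ostree-repo) are assumptions, not the actual names from my repository:

```yaml
# Minimal sketch -- host group and role names are assumptions.
- hosts: ostree_servers
  become: true
  roles:
    - dev-sec.ssh-hardening   # public hardening role from Ansible Galaxy
    - traefik                 # reverse proxy, handles BasicAuth or mTLS
    - ostree-repo             # hypothetical name for the repository role described above
```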

Any thoughts of your own?

Feel free to raise a discussion with me on Mastodon or drop me an email.


The text of this post is licensed under the Attribution 4.0 International License (CC BY 4.0). You may Share or Adapt given the appropriate Credit.

Any source code in this post is licensed under the MIT license.