
Freestyle Integration

This guide walks through running iron-proxy inside Freestyle VMs. You will create a VM snapshot with iron-proxy pre-installed, then use that snapshot to launch ephemeral VMs with egress control already configured.

How It Works

The setup has two phases:

  1. Snapshot creation: A one-time script builds a Freestyle VM that installs iron-proxy, generates a CA certificate, trusts it system-wide, configures DNS to resolve through the proxy, and sets up iptables rules to force all traffic through the proxy. Freestyle snapshots the result.
  2. VM launch: New VMs boot from the snapshot with iron-proxy already running. DNS queries resolve through iron-proxy, which checks them against your allowlist and terminates TLS using per-domain leaf certificates. iptables rules prevent workloads from bypassing the proxy.

Because iron-proxy owns DNS inside the VM and iptables blocks direct outbound connections from non-root processes, all traffic flows through the proxy automatically. There is no need for per-process configuration or custom environment variables.
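Conceptually, the interception works like this (a simplified model, not iron-proxy's actual code): every DNS answer inside the VM points at the proxy's address, so every outbound connection lands on iron-proxy regardless of which hostname the workload asked for. The proxy then recovers the real hostname from the request (for HTTPS, presumably via SNI) and enforces the allowlist before dialing upstream.

```javascript
// Simplified model of in-VM name resolution under iron-proxy (illustrative only).
// dns.proxy_ip from the config: every query is answered with the proxy's address.
const PROXY_IP = "127.0.0.1";

function resolve(hostname) {
  // iron-proxy answers all DNS queries with proxy_ip, so the workload's
  // TCP connection terminates at the proxy no matter which host it wanted.
  return PROXY_IP;
}

console.log(resolve("api.github.com")); // every lookup → 127.0.0.1
console.log(resolve("anything.example")); // every lookup → 127.0.0.1
```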

Prerequisites

  • A Freestyle account with API access
  • Node.js 18+
  • A FREESTYLE_API_KEY environment variable set in your shell

Setup

Install the Freestyle SDK

npm install freestyle

Create the Snapshot Script

Save this as create-snapshot.mjs. It builds a Debian VM, installs iron-proxy, generates a trusted CA, and snapshots the result.

import { freestyle, VmSpec, VmBaseImage } from "freestyle";

const IRON_PROXY_VERSION = process.env.IRON_PROXY_VERSION || "latest";

// iron-proxy configuration
const ironProxyConfig = `
dns:
  listen: ":53"
  proxy_ip: "127.0.0.1"
  upstream_resolver: "8.8.8.8:53"

proxy:
  http_listen: ":80"
  https_listen: ":443"

tls:
  ca_cert: "/etc/iron-proxy/ca.crt"
  ca_key: "/etc/iron-proxy/ca.key"

transforms:
  - name: allowlist
    config:
      warn: false
      domains:
        - "api.github.com"
        - "github.com"
        - "objects.githubusercontent.com"
        - "httpbin.org"

log:
  level: "info"
`.trimStart();

// Oneshot script: download iron-proxy, generate CA, trust CA, stop systemd-resolved,
// create unprivileged user, set up iptables
const installScript = `#!/bin/bash
set -euo pipefail

# Resolve version
VERSION="${IRON_PROXY_VERSION}"
if [ "$VERSION" = "latest" ]; then
  VERSION=$(curl -fsSL https://api.github.com/repos/ironsh/iron-proxy/releases/latest | jq -r '.tag_name | ltrimstr("v")')
fi
echo "Installing iron-proxy v$VERSION"

# Download and install binary
curl -fsSL -o /tmp/iron-proxy.tgz \\
  "https://github.com/ironsh/iron-proxy/releases/download/v\${VERSION}/iron-proxy_\${VERSION}_linux_amd64.tar.gz"
tar -xzf /tmp/iron-proxy.tgz -C /tmp
mv /tmp/iron-proxy /usr/local/bin/iron-proxy
chmod +x /usr/local/bin/iron-proxy
rm -f /tmp/iron-proxy.tgz

# Generate CA for TLS interception
mkdir -p /etc/iron-proxy
openssl genrsa -out /etc/iron-proxy/ca.key 2048 2>/dev/null
openssl req -x509 -new -nodes \\
  -key /etc/iron-proxy/ca.key \\
  -sha256 -days 365 \\
  -subj "/CN=iron-proxy CA" \\
  -addext "basicConstraints=critical,CA:TRUE" \\
  -addext "keyUsage=critical,keyCertSign" \\
  -out /etc/iron-proxy/ca.crt 2>/dev/null

# Trust the CA system-wide
cp /etc/iron-proxy/ca.crt /usr/local/share/ca-certificates/iron-proxy-ca.crt
update-ca-certificates

# Stop systemd-resolved to free port 53
systemctl stop systemd-resolved || true
systemctl disable systemd-resolved || true

# Route DNS through the proxy
echo "nameserver 127.0.0.1" > /etc/resolv.conf

# Create an unprivileged user for running workloads.
# This is required: iptables rules below allow outbound traffic from root
# (which iron-proxy runs as) and reject everything else. If workloads run
# as root, they can bypass the proxy entirely.
useradd -m -s /bin/bash workload

# Set up iptables rules to force traffic through the proxy.
# - Allow all loopback traffic (needed for proxy communication).
# - Allow outbound traffic from root (iron-proxy runs as root).
# - Allow established/related connections (return traffic for accepted connections).
# - Reject everything else: non-root processes cannot reach the network directly.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner root -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -j REJECT --reject-with icmp-port-unreachable
`;

async function main() {
  const baseImage = new VmBaseImage("FROM debian:trixie-slim").runCommands(
    "apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y curl ca-certificates openssl jq iptables sudo"
  );

  const spec = new VmSpec()
    .baseImage(baseImage)
    .additionalFiles({
      "/etc/iron-proxy/config.yaml": { content: ironProxyConfig },
      "/usr/local/bin/install-iron-proxy.sh": { content: installScript },
    })
    .systemdService({
      name: "install-iron-proxy",
      mode: "oneshot",
      exec: ["bash /usr/local/bin/install-iron-proxy.sh"],
      wantedBy: ["multi-user.target"],
      remainAfterExit: true,
      timeoutSec: 120,
    })
    .systemdService({
      name: "iron-proxy",
      mode: "service",
      exec: ["/usr/local/bin/iron-proxy -config /etc/iron-proxy/config.yaml"],
      after: ["install-iron-proxy.service"],
      requires: ["install-iron-proxy.service"],
      restartPolicy: {
        policy: "on-failure",
        restartSec: 5,
      },
    });

  console.log("Creating iron-proxy VM and snapshotting...");
  const { vm, snapshotId } = await freestyle.vms.create({
    snapshot: spec,
    persistence: { type: "ephemeral" },
  });

  // Verify everything is running
  const status = await vm.exec("systemctl status iron-proxy --no-pager");
  console.log("\niron-proxy service status:");
  console.log(status.stdout);

  const resolv = await vm.exec("cat /etc/resolv.conf");
  console.log("DNS configuration:");
  console.log(resolv.stdout);

  await vm.stop();

  console.log(`\nSnapshot ID: ${snapshotId}`);
  console.log(
    "Done. Use this snapshotId to create ephemeral VMs with iron-proxy pre-configured."
  );
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

A few things to note about this script:

  • dns.proxy_ip is set to 127.0.0.1 because iron-proxy runs inside the same VM as your workloads.
  • dns.upstream_resolver uses 8.8.8.8:53. Change this if your network requires a different upstream resolver.
  • warn: false means non-allowlisted requests are blocked immediately. Set to true while building your allowlist.
  • domains includes GitHub (needed for the install script to download iron-proxy) and httpbin.org for testing. Replace these with the domains your workloads need before creating the snapshot.
  • The install script disables systemd-resolved and points /etc/resolv.conf at 127.0.0.1 so all DNS goes through iron-proxy.
  • The CA is trusted system-wide via update-ca-certificates.
  • A workload user is created for running untrusted code. This is required because the iptables rules allow outbound traffic from root. If workloads run as root, they can bypass the proxy.
  • The iptables rules allow loopback and root-owned traffic, then reject everything else. This forces all non-root outbound traffic through the proxy.
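The allowlist decision described above can be sketched as follows. This is hypothetical logic for illustration: the exact matching semantics (for example, whether subdomains of an allowlisted domain match) are defined by iron-proxy's allowlist transform, not by this sketch.

```javascript
// Illustrative model of the allowlist transform's decision, including warn mode.
// Subdomain matching here is an assumption, not confirmed iron-proxy behavior.
function decide(host, { warn, domains }) {
  const allowed = domains.some((d) => host === d || host.endsWith("." + d));
  if (allowed) return "allow";
  // warn: true lets the request through (while you build the allowlist);
  // warn: false blocks it immediately.
  return warn ? "allow-and-log" : "block";
}

const config = { warn: false, domains: ["api.github.com", "httpbin.org"] };
console.log(decide("httpbin.org", config)); // "allow"
console.log(decide("evil.example", config)); // "block"
```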

Bootstrap the Snapshot

Run the script to create your snapshot. This requires a FREESTYLE_API_KEY in your environment:

export FREESTYLE_API_KEY="your-api-key"
node create-snapshot.mjs

The first run takes longer because Freestyle must build the base image from scratch. You should see output like this:

Creating iron-proxy VM and snapshotting...
VM creation is taking longer than expected. This usually happens when there's a cache miss on your vm's base snapshot. Subsequent vm creations with this configuration will likely be much faster.

iron-proxy service status:
● iron-proxy.service - iron-proxy
     Loaded: loaded (/etc/systemd/system/iron-proxy.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-04-08 19:50:52 UTC; 4s ago
 Invocation: 54e29589b0bd4aaa827f7bd0ef8370d2
   Main PID: 1984 (iron-proxy)
      Tasks: 10 (limit: 9551)
     Memory: 3M (peak: 3.6M)
        CPU: 51ms
     CGroup: /system.slice/iron-proxy.service
             └─1984 /usr/local/bin/iron-proxy -config /etc/iron-proxy/config.yaml
...

DNS configuration:
nameserver 127.0.0.1

Snapshot ID: sc-gogdcl41ilq3jabxytjn
Done. Use this snapshotId to create ephemeral VMs with iron-proxy pre-configured.

Save the snapshot ID. You will use it to launch new VMs.

Launch VMs From the Snapshot

Use the snapshot ID to create ephemeral VMs with iron-proxy already running:

import { freestyle } from "freestyle";

const { vm } = await freestyle.vms.create({
  snapshotId: "sc-abc123...",
  persistence: { type: "ephemeral" },
});

// Run commands as the unprivileged workload user
const result = await vm.exec("sudo -u workload curl -s https://httpbin.org/get");
console.log(result.stdout);

await vm.stop();

Verify

Check that iron-proxy is intercepting traffic inside a running VM:

# Inside the VM (via vm.exec or SSH)
systemctl status iron-proxy

You should see the service active and running. Make a test request to confirm:

curl -sv https://httpbin.org/get 2>&1 | grep "issuer"

If iron-proxy is working, the TLS certificate issuer will be iron-proxy CA rather than the real upstream issuer.

Customizing the Allowlist

Edit the domains array in the ironProxyConfig string before creating the snapshot:

transforms:
  - name: allowlist
    config:
      warn: false
      domains:
        - "registry.npmjs.org"
        - "pypi.org"
        - "api.github.com"

To update an existing deployment, re-run create-snapshot.mjs to produce a new snapshot ID, then update your application to use the new ID.

For the full set of configuration options, see the configuration reference.

Egress Control With iptables

The snapshot script sets up iptables rules that force all outbound traffic through iron-proxy. This uses the xt_owner kernel module, which provides the --uid-owner match: it allows traffic from root (which iron-proxy runs as) while rejecting everything else. The rules are:

# Allow all loopback traffic (workloads talk to iron-proxy on 127.0.0.1)
iptables -A OUTPUT -o lo -j ACCEPT

# Allow outbound traffic from root (iron-proxy runs as root)
iptables -A OUTPUT -m owner --uid-owner root -j ACCEPT

# Allow return traffic for established connections
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Reject everything else
iptables -A OUTPUT -j REJECT --reject-with icmp-port-unreachable

These rules mean that non-root processes cannot reach the network directly. Even if a workload hardcodes an IP address or uses its own DNS resolver, the traffic is rejected at the kernel level.
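Order matters here: iptables evaluates the OUTPUT chain top to bottom, and the first matching rule wins, which is why the catch-all REJECT must come last. A small model of that evaluation (illustrative only, not real netfilter code):

```javascript
// First-match-wins model of the four OUTPUT rules set up by the snapshot script.
const rules = [
  { match: (p) => p.iface === "lo", verdict: "ACCEPT" }, // loopback
  { match: (p) => p.uid === 0, verdict: "ACCEPT" }, // root-owned (iron-proxy)
  { match: (p) => p.established, verdict: "ACCEPT" }, // return traffic
  { match: () => true, verdict: "REJECT" }, // everything else
];

function evaluate(packet) {
  for (const rule of rules) {
    if (rule.match(packet)) return rule.verdict;
  }
}

// A non-root workload dialing out directly is rejected at the kernel level:
console.log(evaluate({ iface: "eth0", uid: 1000, established: false })); // "REJECT"
// The same workload talking to iron-proxy over loopback is allowed:
console.log(evaluate({ iface: "lo", uid: 1000, established: false })); // "ACCEPT"
```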

Running Workloads as an Unprivileged User

The iptables rules allow all traffic from root, so workloads must run as a non-root user to prevent circumvention. The snapshot script creates a workload user for this purpose. Run all untrusted code as this user:

// Run commands as the unprivileged workload user
const result = await vm.exec("sudo -u workload your-command-here");

If a workload runs as root, it can bypass the proxy entirely: its traffic matches the --uid-owner root rule and goes straight to the internet. Always ensure untrusted code runs under the workload user or another non-root account.
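Since every command must be prefixed with sudo -u workload, a small wrapper reduces the chance of forgetting it. This is a hypothetical helper, not part of the Freestyle SDK:

```javascript
// Hypothetical helper: build a command line that runs as the workload user.
// Single-quotes the command for bash, escaping any embedded single quotes.
function asWorkload(cmd) {
  const quoted = "'" + cmd.replace(/'/g, "'\\''") + "'";
  return `sudo -u workload bash -c ${quoted}`;
}

console.log(asWorkload("curl -s https://httpbin.org/get"));
// → sudo -u workload bash -c 'curl -s https://httpbin.org/get'
```

You would then call, for example, vm.exec(asWorkload("npm test")) so that untrusted commands never run as root by accident.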

Freestyle’s Built-In Egress Control

If you use Freestyle’s serverless runs, you can use Freestyle’s built-in network permissions instead of iron-proxy. Serverless runs support allow and deny rules that restrict which domains a run can access at the platform level. When an allow rule is specified, only allowlisted domains are accessible and all other requests are blocked. This is a simpler alternative if you do not need iron-proxy’s TLS interception, logging, or transform features.

Trusting the CA

The snapshot script trusts the CA system-wide and sets common runtime environment variables. Most tools will work without additional configuration. If you run into TLS errors, see the CA certificate reference for per-runtime details.

Troubleshooting

iron-proxy service is not running

Check the service logs:

journalctl -u iron-proxy --no-pager -n 50

If the install oneshot failed, check that too:

journalctl -u install-iron-proxy --no-pager -n 50

Common causes: network issues during the iron-proxy binary download, or a version string that does not match a GitHub release.
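Note that the install script derives the version by stripping the leading v from the GitHub release tag (jq's ltrimstr("v")), so a pinned IRON_PROXY_VERSION should be given without the v. The equivalent transformation:

```javascript
// Equivalent of the install script's `jq -r '.tag_name | ltrimstr("v")'`:
// strip a leading "v" if present, otherwise return the tag unchanged.
function normalizeVersion(tag) {
  return tag.startsWith("v") ? tag.slice(1) : tag;
}

console.log(normalizeVersion("v1.4.2")); // "1.4.2"
console.log(normalizeVersion("1.4.2")); // "1.4.2"
```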

DNS resolution fails

Verify that /etc/resolv.conf points at 127.0.0.1:

cat /etc/resolv.conf

If it was overwritten (e.g., by DHCP), the snapshot’s systemd-resolved disable may not have taken effect. Re-run the snapshot creation.

TLS certificate errors

If you see certificate signed by unknown authority, the CA is not trusted by the runtime making the request. Check that update-ca-certificates ran successfully during snapshot creation, and ensure the appropriate environment variable is set for your runtime. See CA Certificates for details.
