Kubernetes Operator
Expose Kubernetes Services through rstream custom resources.
The rstream Kubernetes operator lets a cluster expose Services through rstream without running tunnel commands inside application Pods. Platform teams install the operator once, then application teams declare RstreamConnection and RstreamTunnel resources next to their Deployments and Services.
Resource model
RstreamConnection is the shared connection object for a namespace. In hosted rstream, the preferred field is projectEndpoint; the operator resolves the current engine through the Control plane. Advanced staging or private Control plane environments can override apiURL, but hosted users normally leave it unset.
RstreamTunnel exposes one Kubernetes Service. The operator validates the Service port, writes a ConfigMap, creates a restricted agent Deployment, and keeps RstreamTunnel.status updated with readiness, target, hostname, and forwarding address. For interactive kubectl use, the CRD also supports the tunnel, rtun, and rtunnel aliases.
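Once a tunnel reconciles successfully, a populated status might look like the sketch below. forwardingAddress is the field named later on this page; the exact condition layout and the hostname and target values are illustrative assumptions, not guaranteed output.

```yaml
# Illustrative RstreamTunnel status after a successful reconcile.
# forwardingAddress is the documented field; the condition shape and
# the hostname/target values shown here are assumptions.
status:
  conditions:
    - type: TunnelReady
      status: "True"
  target: http-server:8080
  hostname: web-rstream-demo.example.rstream.dev      # hypothetical hostname
  forwardingAddress: https://web-rstream-demo.example.rstream.dev
```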
Install the operator
Install the Helm chart from the operator repository:
helm upgrade --install rstream-operator ./charts/rstream-operator \
  --namespace rstream-system \
  --create-namespace \
  --set image.repository=rstream/rstream-operator \
  --set image.tag=latest

The chart installs the CRDs, manager Deployment, ServiceAccount, and RBAC needed to watch RstreamConnection, RstreamTunnel, Services, Secrets, and managed agent resources.
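The two --set flags above can also live in a values file. Only image.repository and image.tag are taken from this page; any other chart values would need to be checked against the chart itself.

```yaml
# values.yaml -- mirrors the --set flags from the install command.
image:
  repository: rstream/rstream-operator
  tag: latest
```

Pass it with -f values.yaml in place of the two --set flags.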
For a complete first setup on a fresh k3s cluster, including every command from cluster creation to the final public URL, see Expose a k3s Service with the rstream Kubernetes Operator.
Minimal HTTP Service
Create a namespace, a simple HTTP Deployment, and a Service:
apiVersion: v1
kind: Namespace
metadata:
  name: rstream-demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-server
  namespace: rstream-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-server
  template:
    metadata:
      labels:
        app: http-server
    spec:
      containers:
        - name: http-server
          image: python:3.12-alpine
          command: ["python", "-m", "http.server", "8080", "--bind", "0.0.0.0"]
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: http-server
  namespace: rstream-demo
spec:
  selector:
    app: http-server
  ports:
    - name: http
      port: 8080
      targetPort: 8080

Store the rstream token in the same namespace as the connection:
kubectl -n rstream-demo create secret generic rstream-credentials \
  --from-literal=token="$RSTREAM_TOKEN"

Then declare the connection and tunnel:
apiVersion: tunnels.rstream.io/v1alpha1
kind: RstreamConnection
metadata:
  name: default
  namespace: rstream-demo
spec:
  projectEndpoint: "<project-endpoint>"
  tokenSecretRef:
    name: rstream-credentials
    key: token
---
apiVersion: tunnels.rstream.io/v1alpha1
kind: RstreamTunnel
metadata:
  name: web
  namespace: rstream-demo
spec:
  connectionRef:
    name: default
  target:
    service:
      name: http-server
      port: http
  publish: true
  protocol: http
  http:
    version: http/1.1

Check the result from Kubernetes:
kubectl -n rstream-demo get rstreamconnection,rstreamtunnel
kubectl -n rstream-demo describe rstreamtunnel web

When the tunnel is ready, RstreamTunnel.status.forwardingAddress contains the public address:

kubectl -n rstream-demo get rstreamtunnel web -o jsonpath='{.status.forwardingAddress}{"\n"}'

Self-hosted engines
Self-hosted deployments do not need the Control plane. Set engine directly instead of projectEndpoint:
apiVersion: tunnels.rstream.io/v1alpha1
kind: RstreamConnection
metadata:
  name: default
  namespace: rstream-demo
spec:
  engine: engine.internal.example.com:443
  tokenSecretRef:
    name: rstream-credentials
    key: token

The rest of the RstreamTunnel definition stays the same.
Operational notes
Each RstreamTunnel gets its own agent Deployment. The manager handles Kubernetes reconciliation; the agent owns the data plane and patches TunnelReady when the rstream tunnel is online.
Secrets are not copied into ConfigMaps. Tokens are mounted into the agent through environment variables, and mTLS material is projected from Secrets when mTLS authentication is used.
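As an illustration of that projection, the generated agent Deployment plausibly carries an env entry like the sketch below. The variable name RSTREAM_TOKEN and the surrounding Deployment shape are assumptions; the secretKeyRef values match the Secret created earlier on this page.

```yaml
# Sketch of how the operator could project the token into the agent
# container; RSTREAM_TOKEN is a hypothetical variable name.
env:
  - name: RSTREAM_TOKEN
    valueFrom:
      secretKeyRef:
        name: rstream-credentials
        key: token
```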
Use named Service ports when possible. They make tunnel specs resilient to port number changes and make protocol mismatches easier to detect.
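For example, if the Service from the minimal setup later moves to port 9090, only the Service manifest changes; a RstreamTunnel that references port: http keeps working without edits.

```yaml
# Only the Service changes; a tunnel referencing `port: http` is untouched.
apiVersion: v1
kind: Service
metadata:
  name: http-server
  namespace: rstream-demo
spec:
  selector:
    app: http-server
  ports:
    - name: http
      port: 9090          # changed from 8080
      targetPort: 8080    # container port is unchanged
```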