Architecture Overview
Let’s start with a complete picture of how Edera works, from Kubernetes all the way down to hardware.
The Complete Stack
┌─────────────────────────────────────────────────────────┐
│ Kubernetes │
│ (API Server, Scheduler, etc.) │
└────────────────────────┬────────────────────────────────┘
│
CRI Interface
│
▼
┌─────────────────────────────────────────────────────────┐
│ Edera Node │
│ │ │
│ ┌──────────────────────▼──────────────────────────┐ │
│ │ Protect │ │
│ └──────────────────────┬──────────────────────────┘ │
│ │ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Zone │ │ Zone │ │ Zone │ │ Zone │ │
│ │ ┌─────┐ │ │ ┌─────┐ │ │ ┌─────┐ │ │ ┌─────┐ │ │
│ │ │ Pod │ │ │ │ Pod │ │ │ │ Pod │ │ │ │ Pod │ │ │
│ │ ├─────┤ │ │ ├─────┤ │ │ ├─────┤ │ │ ├─────┤ │ │
│ │ │Kern │ │ │ │Kern │ │ │ │Kern │ │ │ │Kern │ │ │
│ │ └─────┘ │ │ └─────┘ │ │ └─────┘ │ │ └─────┘ │ │
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ Xen Hypervisor │ │
│ └─────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ Hardware (CPU, Memory, Network, Storage, GPU) │ │
│ └─────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Key Layers
Let’s break down each layer:
1. Kubernetes Layer
Standard Kubernetes control plane:
- API Server
- Scheduler
- Controller Manager
- etcd
Nothing changes here. Edera is completely transparent to Kubernetes.
2. CRI Interface
The Container Runtime Interface (CRI) is how Kubernetes talks to container runtimes.
Edera implements the CRI, so it looks to Kubernetes like any other container runtime.
3. Styrolite (Container Runtime)
Styrolite is Edera’s CRI-compatible container runtime. It:
- Receives pod creation requests from kubelet
- Manages container images (pull, extract, cache)
- Translates container specifications to Zone configurations
- Communicates with Protect for VM lifecycle
- Reports pod status back to Kubernetes
Think of it as: The bridge between Kubernetes and Xen.
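As a sketch, the translation step in the list above might look like the following. All field names here (ZoneConfig keys, kernel name, memory units) are illustrative assumptions, not Styrolite's actual API:

```python
# Hypothetical sketch of the CRI -> Zone translation Styrolite performs.
# Field names are illustrative, not Styrolite's real data model.

def pod_to_zone_config(pod_sandbox: dict) -> dict:
    """Map a CRI pod sandbox spec onto a per-pod VM (Zone) config."""
    resources = pod_sandbox.get("resources", {})
    return {
        "name": f"zone-{pod_sandbox['metadata']['uid']}",
        # Each Zone boots its own guest kernel.
        "kernel": "edera-guest-kernel",
        # Resource requests become VM-level allocations.
        "vcpus": int(resources.get("cpu", 1)),
        "memory_mib": int(resources.get("memory_mib", 256)),
        # The pod's network namespace maps to a virtual NIC in the Zone.
        "network": {"cni_netns": pod_sandbox.get("netns", "")},
    }

sandbox = {
    "metadata": {"uid": "abc123"},
    "resources": {"cpu": 2, "memory_mib": 512},
    "netns": "/var/run/netns/pod-abc123",
}
zone = pod_to_zone_config(sandbox)
print(zone["name"], zone["vcpus"], zone["memory_mib"])
```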
4. Protect (Xen Control Plane)
Protect is Edera’s control plane for Xen. It:
- Creates and destroys Zones
- Manages VM resources (CPU, memory, disk)
- Configures networking and storage
- Handles GPU allocation
- Provides APIs for VM management
Think of it as: The conductor orchestrating the hypervisor.
5. Xen Hypervisor
The Xen hypervisor:
- Enforces hardware isolation between VMs
- Schedules VM CPU time
- Manages memory allocation
- Mediates device access
- Provides the security boundary
Think of it as: The foundation everything is built on.
6. Hardware
Physical resources:
- CPU (with VT-x/AMD-V support)
- Memory (RAM)
- Network interfaces
- Storage devices
- GPUs (optional)
Data Flow: Pod Creation
Let’s trace what happens when you deploy a pod:
1. kubectl apply -f pod.yaml
│
▼
2. Kubernetes API Server
│ (Validates and stores pod spec)
▼
3. Kubernetes Scheduler
│ (Selects node for pod)
▼
4. Kubelet (on Edera node)
│ (Receives pod assignment)
▼
5. Styrolite (via CRI)
│ (Pulls container image)
│ (Translates to Zone spec)
▼
6. Protect
│ (Creates Zone via Xen)
│ (Configures resources)
▼
7. Xen Hypervisor
│ (Boots Zone)
│ (Enforces isolation)
▼
8. Zone boots
│ (Guest kernel starts)
▼
9. Container starts in Zone
│ (Application runs)
▼
10. Styrolite reports success to kubelet
│
▼
11. Pod status: Running
Total time: ~1-2 seconds (comparable to traditional containers)
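The handoff chain above can be modeled as a toy pipeline. Every function here is a stand-in for the real component at that step, with the step numbers noted in comments:

```python
# Toy model of the pod-creation handoff chain; names are illustrative.

def xen_boot_zone(spec):            # steps 7-8: hypervisor boots the Zone
    return {"zone": spec["name"], "state": "running"}

def protect_create_zone(spec):      # step 6: control plane creates the Zone
    return xen_boot_zone(spec)

def styrolite_run_pod(pod):         # step 5: translate pod to a Zone spec
    spec = {"name": f"zone-{pod['uid']}", "image": pod["image"]}
    return protect_create_zone(spec)

def kubelet_sync(pod):              # steps 4, 9-11: run pod, report status up
    status = styrolite_run_pod(pod)
    return "Running" if status["state"] == "running" else "Pending"

print(kubelet_sync({"uid": "abc123", "image": "nginx:1.27"}))  # Running
```

The point of the sketch is the direction of the calls: requests flow down the stack, and status flows back up the same path.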
Control Plane vs Data Plane
It’s useful to think about Edera in terms of control and data planes:
Control Plane
Components: Styrolite, Protect
Responsibility: Managing VM lifecycle, resource allocation, configuration
Kubernetes → Styrolite → Protect → Xen
↑ ↓ ↓ ↓
└───────────┴──────────┴──────┘
(Status reporting back)
Data Plane
Components: Zones, networking, storage
Responsibility: Running workloads, processing traffic, storing data
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Zone │◄──►│ Zone │◄──►│ Zone │
│ (Pod) │ │ (Pod) │ │ (Pod) │
└─────────┘ └─────────┘ └─────────┘
│ │ │
└──────────────┼──────────────┘
▼
Network / Storage
Component Communication
How do components talk to each other?
Styrolite ↔ Protect
- Protocol: gRPC
- Interface: Protect API
- Operations:
- Create/delete VM
- Query VM status
- Attach/detach devices
- Configure networking
Protect ↔ Xen
- Protocol: libxl (Xen library)
- Interface: Xen toolstack
- Operations:
- VM lifecycle (create, pause, resume, destroy)
- Resource allocation
- Device configuration
- Event monitoring
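The lifecycle operations above form a small state machine. Here is a model of the legal transitions; this is not libxl (which is a C library), just an illustration of the lifecycle the text lists:

```python
# Model of the VM lifecycle Protect drives through libxl:
# create, pause, resume, destroy. A state machine sketch, not real bindings.

TRANSITIONS = {
    ("absent",  "create"):  "running",
    ("running", "pause"):   "paused",
    ("paused",  "resume"):  "running",
    ("running", "destroy"): "absent",
    ("paused",  "destroy"): "absent",
}

def step(state: str, op: str) -> str:
    """Apply one lifecycle operation, rejecting illegal transitions."""
    try:
        return TRANSITIONS[(state, op)]
    except KeyError:
        raise ValueError(f"illegal {op!r} in state {state!r}")

# Walk a Zone through a full lifecycle.
s = "absent"
for op in ("create", "pause", "resume", "destroy"):
    s = step(s, op)
print(s)  # absent
```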
Kubelet ↔ Styrolite
- Protocol: gRPC (CRI)
- Interface: CRI v1
- Operations:
- RunPodSandbox
- CreateContainer
- StartContainer
- StopContainer
- RemovePodSandbox
- ListContainers
- ContainerStatus
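The RPC names above are the real CRI v1 RuntimeService calls. The sketch below uses a stand-in client to show the order in which kubelet issues them over a pod's life; note that the sandbox (the Zone, in Edera's case) is created before any container and removed after all of them:

```python
# Record the CRI call order for a pod's lifecycle.
# Recorder is a stand-in client; the method names are real CRI v1 RPCs.

class Recorder:
    def __init__(self):
        self.calls = []
    def __getattr__(self, rpc):
        def call(*args):
            self.calls.append(rpc)
            return f"{rpc}-ok"
        return call

cri = Recorder()
sandbox = cri.RunPodSandbox("pod-spec")      # creates the Zone (VM) first
ctr = cri.CreateContainer(sandbox, "nginx")  # then the container inside it
cri.StartContainer(ctr)
cri.StopContainer(ctr)
cri.RemovePodSandbox(sandbox)                # tears the Zone down last
print(cri.calls)
```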
State Management
Where is state stored?
Kubernetes State
- Location: etcd (Kubernetes control plane)
- Contents: Pod specs, desired state, cluster configuration
- Persistence: Kubernetes manages this
Protect State
- Location: Local node filesystem
- Contents:
- VM configurations
- Network state
- Container images
- Zone configurations
- Persistence: Survives node reboots (for configuration)
Runtime State
- Location: In-memory (Xen, Protect)
- Contents:
- Running VMs
- Active connections
- Resource allocations
- Persistence: Ephemeral, lost on node reboot
Failure Modes and Recovery
What happens when things fail?
Zone Crashes
If a Zone crashes:
1. Xen detects the failure
2. Protect receives the event
3. Styrolite notifies kubelet
4. Kubernetes restarts the pod
5. A new Zone is created
Same as traditional containers!
Protect Crashes
If Protect (the control plane) crashes:
1. Running VMs continue running (the data plane is unaffected)
2. Protect restarts
3. It reconnects to the existing VMs
4. Control plane operations resume
Graceful degradation.
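A minimal sketch of this recovery path, assuming hypothetical helper names (the real adoption logic is internal to the control plane): on restart, the control plane asks the hypervisor which domains survived and re-adopts them instead of recreating them.

```python
# Control-plane recovery sketch: rebuild in-memory state from the
# hypervisor's view. Function names are illustrative.

def list_running_domains():
    # Stand-in for querying Xen; Zones survive a control-plane crash.
    return [{"domid": 1, "name": "zone-abc"}, {"domid": 2, "name": "zone-def"}]

def recover_state():
    state = {}
    for dom in list_running_domains():
        # Re-adopt each surviving Zone rather than recreating it.
        state[dom["name"]] = {"domid": dom["domid"], "adopted": True}
    return state

state = recover_state()
print(sorted(state))  # ['zone-abc', 'zone-def']
```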
Node Reboot
If the entire node reboots:
1. All VMs are lost (ephemeral)
2. Node comes back up
3. Kubelet reconnects to Kubernetes
4. Kubernetes reschedules pods
5. New VMs are created
Same as traditional containers!
Resource Overhead
What’s the overhead of this architecture?
Per-Node Overhead
| Component | Memory | CPU | Disk |
|---|---|---|---|
| Xen | ~100 MB | ~1% | ~50 MB |
| Protect | ~50 MB | ~1% | ~20 MB |
| Styrolite | ~100 MB | ~2% | ~50 MB |
| Total | ~250 MB | ~4% | ~120 MB |
This is the baseline overhead per node, regardless of how many VMs you run.
Per-Zone Overhead
| Resource | Overhead |
|---|---|
| Memory | ~10 MB |
| CPU | ~5% |
| Disk | ~5 MB (kernel + init) |
| Boot time | ~800ms |
This scales linearly with the number of pods.
Example: 30 Pods
- Node overhead: 250 MB memory, 4% CPU
- Pod overhead: 30 × 10 MB = 300 MB memory; 30 × 5% = 150% CPU in aggregate (amortized over time, since zones are rarely all busy at once)
- Total overhead: ~550 MB memory, ~4-10% CPU (depending on workload)
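The memory arithmetic above, as a small helper. The figures are the tables' estimates, not measurements:

```python
# Per-node memory overhead model from the tables above.

NODE_MEM_MB = 250   # baseline: Xen + Protect + Styrolite
ZONE_MEM_MB = 10    # per-Zone overhead

def overhead_mb(pods: int) -> int:
    """Total memory overhead for a node running `pods` Zones."""
    return NODE_MEM_MB + pods * ZONE_MEM_MB

print(overhead_mb(30))  # 550
```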
For most nodes this overhead is acceptable, and ongoing memory-ballooning work will shrink the footprint further.
Scalability
How does Edera scale?
Vertical Scaling (Per-Node)
A single node can run:
- 250+ Zones on a typical server (e.g., 64 cores, 256 GB RAM)
- Limited by resource availability, not architecture
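A rough capacity check for the 250+ figure, assuming (for illustration only) 1 GiB of workload memory per Zone on a 256 GB node:

```python
# Back-of-envelope check: memory, not the architecture, sets the Zone limit.

NODE_RAM_MB = 256 * 1024   # 256 GB node
BASELINE_MB = 250          # per-node overhead (Xen + Protect + Styrolite)
PER_ZONE_MB = 10           # per-Zone overhead
WORKLOAD_MB = 1024         # assumed workload memory per Zone (illustrative)

max_zones = (NODE_RAM_MB - BASELINE_MB) // (WORKLOAD_MB + PER_ZONE_MB)
print(max_zones)  # 253
```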
Horizontal Scaling (Cluster)
- 1000s of nodes in a cluster
- Same scaling characteristics as traditional Kubernetes
- No centralized bottleneck
Performance at Scale
Edera’s architecture is designed for scale:
- No shared state between nodes
- Control plane is distributed
- Data plane is fully distributed
- Resource management is local
Security Architecture
How does the security model work?
Trust Boundaries
┌─────────────────────────────────────┐
│ Untrusted: Guest VM / Container │ ← Assume compromised
├─────────────────────────────────────┤
│ Trusted: Xen Hypervisor │ ← Must be secure
├─────────────────────────────────────┤
│ Trusted: Edera / Styrolite │ ← Control plane (Dom0)
├─────────────────────────────────────┤
│ Trusted: Hardware │ ← Foundation
└─────────────────────────────────────┘
Key insight: Only the hypervisor and control plane are trusted. Guest VMs are assumed hostile.
Attack Surface
What could an attacker target?
Guest VM (assumed compromised)
- Isolation: Hypervisor prevents escape
Hypervisor (360K lines of code)
- Mitigations: Minimal code, hardware enforcement, regular pentests
Control Plane (Edera/Styrolite)
- Mitigations: Runs in Dom0 (privileged), limited API surface
Hardware
- Mitigations: Hardware security features (VT-x, IOMMU)
The attack surface is much smaller than with traditional containers, which share the host's ~27M-line Linux kernel.
Integration Points
Where does Edera integrate with existing systems?
Kubernetes Integration
- CRI: Styrolite implements standard CRI
- CNI: Standard Kubernetes network plugins work
- CSI: Standard storage plugins work
- Device Plugins: GPU and other devices supported
Observability Integration
- Metrics: Prometheus-compatible metrics
- Logs: Standard stdout/stderr collection
- Tracing: Distributed tracing support
- Events: Kubernetes events for lifecycle
Security Integration
- RBAC: Standard Kubernetes RBAC
- Network Policies: Enforced at hypervisor level
- Pod Security: Additional isolation layer
- Audit Logs: Full audit trail
Summary
Edera’s architecture is:
- Layered: Clean separation of concerns
- Standards-compliant: Works with existing Kubernetes
- Scalable: No architectural bottlenecks
- Secure: Minimal trusted computing base
- Performant: Acceptable overhead for strong isolation
The key insight: Insert a hypervisor layer between Kubernetes and containers without breaking existing workflows.
