Production Best Practices
You’ve installed Edera. Now let’s make sure it’s production-ready.
Pre-Deployment Planning
Maintenance Windows
Plan for Node Reboots
- Edera installation requires node reboots to load the hypervisor
- Schedule installations during maintenance windows
- Coordinate with teams that depend on the infrastructure
Rolling Deployments
- Install Edera on nodes one at a time (or in small batches)
- Verify each node before moving to the next
- Maintain cluster capacity during the rollout
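The rollout steps above can be sketched as a shell loop. This is a sketch only; the node names, the readiness timeout, and the installation step are placeholders for your environment:

```bash
# Hypothetical rolling install: drain, install, reboot, verify, uncordon.
# Replace the node list and the install step with your own.
for node in worker-1 worker-2 worker-3; do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data

  # ...run the Edera installer on the node and reboot it here...

  # Wait until the node reports Ready again before moving on.
  kubectl wait --for=condition=Ready "node/$node" --timeout=10m
  kubectl uncordon "$node"
done
```

Verifying each node before uncordoning it keeps failed installs from receiving workloads.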
Node Sizing
Resource Overhead
- Edera’s microVM isolation adds minimal overhead (typically 5-10%)
- Size nodes with this overhead in mind
- Monitor resource usage during initial deployments
Memory Considerations
- Each Edera zone requires a small amount of memory for the microVM
- For workloads with many small containers, ensure adequate node memory
- Use node resource monitoring to validate sizing (covered in Module 6)
RuntimeClass Strategy
Default to Edera for Sensitive Workloads
Use a Kubernetes admission controller (such as Kyverno) to automatically apply `runtimeClassName: edera` to specific workloads:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: edera-for-production
spec:
  rules:
    - name: add-edera-runtime
      match:
        resources:
          kinds:
            - Pod
          namespaces:
            - production
            - finance
            - healthcare
      mutate:
        patchStrategicMerge:
          spec:
            runtimeClassName: edera
```

Mixed Runtime Environments
Not all workloads need hardware isolation. Use Edera where it matters:
- High isolation needs: Customer data processing, CI/CD jobs, multi-tenant workloads, privileged operations
- Standard containers: Internal tools, non-sensitive batch processing
This lets you optimize cost and performance while maximizing security where it counts.
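In practice the split comes down to the presence or absence of `runtimeClassName` on the pod spec. A minimal sketch, with illustrative names and images:

```yaml
# Sensitive workload: runs in an Edera zone
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      runtimeClassName: edera
      containers:
        - name: api
          image: registry.example.com/payments-api:1.0
---
# Internal tool: standard container runtime (no runtimeClassName)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: internal-dashboard
  template:
    metadata:
      labels:
        app: internal-dashboard
    spec:
      containers:
        - name: dashboard
          image: registry.example.com/dashboard:1.0
```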
Security Hardening
Container Registry Access
Secure Your GAR Key
- Store the GAR key in a secrets management system (Vault, AWS Secrets Manager, etc.)
- Rotate credentials periodically
- Limit access to the key to only necessary personnel
Private Registries
- Use private container registries for your application images
- Ensure nodes can pull from your registries
- Configure imagePullSecrets for Kubernetes deployments
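A typical setup, assuming a hypothetical registry host and secret name: create a docker-registry secret, then reference it from the pod spec.

```bash
kubectl create secret docker-registry my-registry-credentials \
  --docker-server=registry.example.com \
  --docker-username=ci-pull \
  --docker-password="$REGISTRY_TOKEN"
```

```yaml
spec:
  runtimeClassName: edera
  imagePullSecrets:
    - name: my-registry-credentials
  containers:
    - name: app
      image: registry.example.com/team/app:1.0
```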
Network Policies
Edera isolation complements network policies—use both:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: edera-workload-policy
spec:
  podSelector:
    matchLabels:
      runtime: edera
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
```

Monitoring & Observability
Service Health
Monitor Edera services on each node:
```bash
systemctl list-units --type=service | grep protect
```

Expected services:

- `protect-cri` - Container Runtime Interface
- `protect-daemon` - Core daemon
- `protect-networking-daemon` - Networking
- `protect-orchestrator` - Orchestration
- `protect-storage-daemon` - Storage
- `protect-preinit` - Pre-initialization
Set up alerts if any of these services fail.
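One way to wire this into alerting is a small check script run by cron or your node agent. This is a sketch, not an official Edera tool:

```bash
#!/usr/bin/env bash
# Exit non-zero (and print the failures) if any Edera service is down.
services=(protect-cri protect-daemon protect-networking-daemon
          protect-orchestrator protect-storage-daemon protect-preinit)
status=0
for svc in "${services[@]}"; do
  if ! systemctl is-active --quiet "$svc"; then
    echo "ALERT: $svc is not active" >&2
    status=1
  fi
done
exit $status
```

The non-zero exit status lets cron, a systemd timer, or a monitoring agent surface the failure.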
Log Aggregation
Collect Edera logs for centralized analysis:
```bash
journalctl -u protect-cri --since "1 hour ago"
journalctl -u protect-daemon --since "1 hour ago"
```

Integrate with your existing log aggregation solution (ELK, Splunk, Datadog, etc.).
Zone Monitoring
Track active Edera zones:
```bash
protect zone list
```

Monitor zone count and resource usage over time to understand workload patterns.
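To trend zone counts, you can feed the list into your metrics pipeline. This sketch assumes the output has one zone per line plus a header row, and that you use node-exporter's textfile collector; adjust both for your actual setup:

```bash
# Emit a zone-count metric for scraping.
# The textfile directory path depends on your node-exporter configuration.
count=$(protect zone list | tail -n +2 | wc -l)
echo "edera_zone_count $count" > /var/lib/node_exporter/textfile/edera.prom
```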
Backup & Recovery
Configuration Backup
RuntimeClass Definitions
- Store RuntimeClass YAML in version control
- Include in your GitOps workflows
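The RuntimeClass itself is a small object; a sketch of what you would keep in Git (the handler name here is an assumption, so check what your Edera installation registered):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera  # assumed; must match the handler your installation registers
```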
Node Configuration
- Back up kubelet configuration (`/etc/default/kubelet`)
- Document any custom configurations applied during installation
Disaster Recovery
Node Failure Scenarios
- If an Edera node fails, Kubernetes automatically reschedules pods to healthy nodes
- Ensure you have sufficient capacity for node failures
- Test failover scenarios in staging environments
Reinstallation Process
- Keep installation scripts and credentials in a secure, accessible location
- Document your specific installation parameters
- Practice reinstallation in test environments
Performance Optimization
Workload Placement
Node Affinity

Use node affinity to control which workloads run on Edera nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  runtimeClassName: edera
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: runtime
                operator: In
                values:
                  - edera
  containers:
    - name: app
      image: registry.example.com/secure-app:1.0  # example image
```

Resource Limits

Set appropriate resource limits for Edera workloads:
```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```

Upgrades & Maintenance
Edera Updates
When Edera releases updates:
- Test in staging: deploy updates to a test environment first
- Review release notes: understand changes and potential impacts
- Roll out incrementally: update nodes one at a time or in small batches
- Monitor closely: watch for issues after each node update
Kubernetes Upgrades
Edera is compatible with Kubernetes 1.24+. When upgrading Kubernetes:
- Verify Edera compatibility with the new Kubernetes version
- Test the upgrade in a non-production environment
- Follow Kubernetes best practices for version upgrades
- Monitor Edera services after the upgrade
Support & Documentation
When You Need Help
- Documentation: https://docs.edera.dev
- Customer Portal: https://customer.edera.dev/
- Email Support: support@edera.dev
Information to Provide
When contacting support, include:
- Edera version (check AMI or installer version)
- Kubernetes version
- Cloud platform (AWS, Azure, on-prem, etc.)
- Error messages and relevant logs
- Steps to reproduce the issue
Module Complete
You now know how to install Edera across different environments and follow production best practices.
Next module: Module 5: Troubleshooting →
