Common Issues & Solutions
Most Edera issues fall into predictable categories. This section covers the most common problems and their solutions.
Issue 1: Kubelet Fails After Installation
Symptoms:
- Kubelet service fails to start after Edera installation
- Node shows `NotReady` in `kubectl get nodes`
- Kubelet logs show CRI connection errors
Cause: Incorrect or missing kubelet configuration for the Edera CRI socket.
Solution:
Check kubelet configuration:
```
cat /etc/default/kubelet
```
Amazon Linux - Correct configuration:
```
KUBELET_EXTRA_ARGS="--container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"
```
Linode/Akamai (LKE) - Correct configuration:
```
KUBELET_EXTRA_ARGS="--cloud-provider=external --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"
```
If the configuration is missing or incorrect:
```
# Amazon Linux
echo 'KUBELET_EXTRA_ARGS="--container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"' > /etc/default/kubelet

# Linode/Akamai
echo 'KUBELET_EXTRA_ARGS="--cloud-provider=external --container-runtime-endpoint=unix:///var/lib/edera/protect/cri.socket"' > /etc/default/kubelet

# Restart kubelet
systemctl restart kubelet
systemctl status kubelet
```
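To confirm kubelet actually picked up the new endpoint, the checks below can help. `crictl` is not installed by default on every node, so that step is optional.
```
# Confirm kubelet was started with the Edera CRI endpoint
ps aux | grep '[k]ubelet' | grep -o 'container-runtime-endpoint=[^ ]*'

# If crictl is available, confirm the socket responds
crictl --runtime-endpoint unix:///var/lib/edera/protect/cri.socket version
```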
Issue 2: Node Hasn’t Rebooted Since Installation
Symptoms:
- Edera services start but pods fail to run
- Hypervisor not loaded
Cause: The Edera hypervisor requires a node reboot to load at boot time.
Solution:
Check node uptime:
```
uptime
```
If the node has been running since before the Edera installation, reboot:
```
reboot
```
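On a multi-node cluster it is usually worth draining the node first so workloads reschedule cleanly; `<node-name>` below is a placeholder.
```
# From a machine with cluster access: move workloads off the node first
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# On the node:
reboot

# After the node reports Ready again:
kubectl uncordon <node-name>
```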
Issue 3: RuntimeClass Not Found
Symptoms:
- Pods remain in `Pending` state
- Events show: `RuntimeClass "edera" not found`
Cause: The Edera RuntimeClass hasn’t been created in Kubernetes.
Solution:
Create the RuntimeClass:
```
kubectl apply -f https://public.edera.dev/kubernetes/runtime-class.yaml
```
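If the cluster can’t reach that URL, an equivalent RuntimeClass can be applied inline. This is a minimal sketch that assumes the handler is named `edera`; the published manifest above is authoritative.
```
# Minimal RuntimeClass sketch -- the handler name is an assumption
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera
EOF
```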
Verify it was created:
```
kubectl get runtimeclass
```
Issue 4: Pods Stuck in Pending
Symptoms:
- Pods with `runtimeClassName: edera` remain in `Pending` state
- No errors in pod events
Cause: Pods may be pending due to scheduling constraints, resource availability, or node readiness.
Diagnosis:
Check pod events:
```
kubectl describe pod <pod-name>
```
Look for scheduling-related messages.
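To list only the events for a specific pod, sorted by time, something like the following works (add `-n <namespace>` if needed):
```
# Events for a single pod, newest last
kubectl get events --field-selector involvedObject.name=<pod-name> --sort-by=.lastTimestamp
```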
Common causes and solutions:
No nodes available:
- Verify nodes are `Ready`: `kubectl get nodes`
- Check node taints: `kubectl describe node <node-name> | grep Taint`
- Ensure nodes have sufficient resources
Node affinity/tolerations:
- If using node selectors or affinity rules, verify nodes are labeled correctly
- Check for taints that prevent scheduling (see the sketch after this list)
Resource constraints:
- Pod resource requests exceed available node capacity
- Reduce requests or add more nodes
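Where node selectors, affinity, or taints are in play, the hypothetical manifest below shows how a pod declares them; the taint key/value and node label are placeholders, not Edera requirements.
```
# Hypothetical pod spec: the nodeSelector and toleration values are placeholders
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-example
spec:
  runtimeClassName: edera
  nodeSelector:
    example.com/edera: "true"
  tolerations:
    - key: "example.com/dedicated"
      operator: "Equal"
      value: "edera"
      effect: "NoSchedule"
  containers:
    - name: app
      image: nginx
EOF
```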
Issue 5: Edera Services Not Running
Symptoms:
- One or more Edera services show as `failed` or `inactive`
- Pods fail to start with CRI errors
Cause: Service failure due to misconfiguration, missing dependencies, or system issues.
Diagnosis:
Check service status:
```
systemctl status protect-cri
systemctl status protect-daemon
```
Review service logs:
```
journalctl -u protect-cri -n 100
journalctl -u protect-daemon -n 100
```
Solution:
Attempt to restart the failed service:
```
systemctl restart protect-cri
systemctl status protect-cri
```
If the service fails to start, review logs for specific error messages.
Common issues:
- Permission errors: Check file permissions on Edera binaries and sockets (see the commands after this list)
- Port conflicts: Ensure no other services are using Edera’s ports
- Missing dependencies: Verify all Edera components are installed
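Quick checks for the first two items; the socket path comes from the kubelet configuration above, and `ss` ships with iproute2 on most distributions.
```
# Ownership and permissions on the Edera CRI socket
ls -l /var/lib/edera/protect/cri.socket

# Listening sockets, to spot port conflicts
ss -tlnp
```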
Issue 6: Pods Running but Not Isolated
Symptoms:
- Pods start successfully
- `kubectl describe pod` shows `Runtime Class Name: edera`
- But pods don’t appear in `protect zone list`
Cause: RuntimeClass is configured but pods aren’t actually being routed to the Edera runtime.
Diagnosis:
SSH to the node and check:
```
protect zone list
```
If the pod is missing from the zone list, the CRI isn’t handling it.
Solution:
Verify kubelet is using the Edera CRI socket:
```
cat /etc/default/kubelet
```
Restart kubelet if the configuration is correct:
```
systemctl restart kubelet
```
Delete and recreate the pod:
```
kubectl delete pod <pod-name>
kubectl apply -f <pod-definition.yaml>
```
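A quick end-to-end check is a throwaway pod pinned to the Edera runtime; `nginx` here is just a stand-in image.
```
# Smoke test: create a pod on the Edera runtime, then confirm a zone appears for it
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: edera-smoke-test
spec:
  runtimeClassName: edera
  containers:
    - name: app
      image: nginx
EOF

# On the node the pod was scheduled to:
protect zone list
```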
Issue 7: Performance Degradation
Symptoms:
- Pods running slower than expected
- High resource usage on nodes
Cause: Resource contention, insufficient node sizing, or workload characteristics.
Diagnosis:
Check node resource usage:
```
kubectl top nodes
kubectl top pods --all-namespaces
```
Review Edera zone resource consumption:
```
ssh root@<node-ip>
protect zone list
```
Solution:
Insufficient resources:
- Scale up node instance types
- Add more nodes to the cluster
Resource limits too low:
- Increase pod resource requests and limits
- Review and adjust based on actual usage (see the sketch after this list)
Workload optimization:
- Review application performance bottlenecks
- Consider workload-specific tuning
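As a starting point for explicit sizing, a sketch with placeholder values; tune them against what `kubectl top` reports.
```
# Hypothetical example: explicit requests/limits (values are placeholders, not recommendations)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sized-example
spec:
  runtimeClassName: edera
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "512Mi"
EOF
```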
Quick Reference
| Issue | Quick Fix |
|---|---|
| Kubelet fails | Check /etc/default/kubelet configuration |
| Services down | systemctl restart protect-cri protect-daemon |
| RuntimeClass missing | kubectl apply -f runtime-class.yaml |
| Pods pending | Check node status and resources |
| Node not rebooted | reboot |
| Image pull errors | Verify registry credentials |
Up next: Debugging Workflows →
