Cloud-Native & Platform Engineering – Post KubeCon @ SUSE, Key Insights and How SUSE is Powering the Future of AI, Observability & Secure Kubernetes
Two weeks after KubeCon + CloudNativeCon Europe 2025 in London, CNCF Ambassador Orlin Vasilev organized a Cloud-Native & Platform Engineering – Post KubeCon @SUSE meetup in Sofia, Bulgaria, gathering attendees of the conference to share their insights and experiences.
The energy at KubeCon London was electric, with thousands of attendees converging to explore the cutting edge of Kubernetes, cloud-native platforms, and, perhaps more than ever before, the transformative power of AI and machine learning at scale. The AI Day was all about GPU optimization, security guardrails, and observability in AI!
From LLMOps pipelines to secure edge inference, the conversations made it clear: Kubernetes is no longer just the backbone of container orchestration — it’s fast becoming the go-to foundation for intelligent, secure, and observable platforms.
In this post, we break down the key themes from KubeCon and highlight where SUSE fits.
🚀1. AI/ML + LLMOps Are Production-Ready — and Kubernetes is the Launchpad
What We Heard:
AI was everywhere — not just in theory, but in practice. Talks covered:
- End-to-end LLMOps lifecycle management with OSS tools like KServe, Kubeflow, and Kueue.
- Agentic AI patterns to streamline operations like Infrastructure as Code.
- Edge-to-cloud pipelines using Wasm, KubeEdge, and Rust-based inference.
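To make the KServe mention above concrete, an LLM predictor can be declared with a single `InferenceService` resource. The following is only a minimal sketch: the name, model reference, and GPU request are hypothetical placeholders, not a tested configuration.

```yaml
# Minimal KServe InferenceService sketch for an LLM predictor.
# The name, storageUri, and resource sizing are placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-demo                                  # placeholder name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface                         # KServe's Hugging Face serving runtime
      storageUri: hf://example-org/example-model  # placeholder model reference
      resources:
        limits:
          nvidia.com/gpu: "1"                     # one GPU for inference
```

In practice, GPU-backed predictors like this are often paired with Kueue for quota-aware scheduling across shared clusters.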
Where SUSE Fits:
- SUSE Rancher Prime orchestrates LLM workloads across hybrid clouds and edge environments.
- SUSE AI (emerging) aims to offer integrated LLMOps support with secure, scalable deployment models, giving our customers the freedom to choose the LLM that works best for their AI use cases.
- SUSE Virtualization (formerly Harvester), our HCI solution, offers cost-effective GPU and compute infrastructure, ideal for AI workloads at scale.
📈 2. Observability Is Now AI-Aware
What We Heard:
- Observability stacks need to evolve for LLM performance, fairness, and cost monitoring.
- Tools like OpenTelemetry are extending into AI pipelines and distributed training workloads.
Where SUSE Fits:
- SUSE Observability, powered by OpenTelemetry, delivers unified metrics, logs, and traces — with support for AI inference observability.
- SUSE AI is launching an Observability Dashboard for its AI components later in April 2025.
- Rancher integrates OTel natively, simplifying visibility across multi-cluster deployments.
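For teams wiring up OpenTelemetry themselves, a minimal Collector pipeline that receives OTLP traces from an inference service and forwards them to a backend looks roughly like this sketch (the exporter endpoint is a placeholder for your observability backend):

```yaml
# Minimal OpenTelemetry Collector configuration sketch.
receivers:
  otlp:
    protocols:
      grpc: {}                                         # OTLP over gRPC (port 4317)
      http: {}                                         # OTLP over HTTP (port 4318)
processors:
  batch: {}                                            # batch spans before export
exporters:
  otlphttp:
    endpoint: https://observability.example.com:4318   # placeholder endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

The same pipeline shape can be extended with `metrics` and `logs` sections to capture inference latency, token throughput, and cost-related signals alongside traces.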
🔐 3. Security, Trust & Compliance Take Center Stage
What We Heard:
- AI introduces new risks: adversarial attacks, data leakage, model spoofing.
- Talks focused on using confidential computing, runtime protection, and secure model serving to mitigate threats.
- Growing urgency around EU AI Act and Cyber Resilience Act compliance.
Where SUSE Fits:
- With SUSE AI we are developing GreenDocs for guardrails technologies. We are also partnering with companies like Guardrails AI and Styrk AI to provide additional capabilities within the SUSE AI stack. SUSE's partnership with Infosys gives joint customers access to Infosys Responsible AI, which provides advanced defensive technical guardrails and enhances model transparency with insights into the rationale behind AI-generated output, without compromising performance or user experience.
- SUSE Security (formerly NeuVector) provides deep runtime protection, scanning, and eBPF-based security — ideal for protecting AI pipelines and inference endpoints.
- SLE Micro and Confidential Containers support emerging compliance standards.
- SUSE is committed to delivering trusted, secure supply chains for all workloads — including GenAI.
🧱 4. Platform Engineering for the AI Era
What We Heard:
- Developers demand self-service platforms that abstract infrastructure complexity — especially for AI/ML workflows.
- GitOps, Kubernetes-native CI/CD, and internal developer platforms (IDPs) were top-of-mind.
Where SUSE Fits:
- Rancher Fleet + GitOps empowers teams to build AI platforms-as-products.
- SUSE AI is extending this experience into agentic AI workflows. We are providing GreenDocs to help customers interact with large language models (LLMs) as needed.
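As a sketch of the GitOps flow above, a Fleet `GitRepo` resource can continuously deploy platform manifests to labeled downstream clusters. The repository URL, paths, and labels below are hypothetical placeholders:

```yaml
# Fleet GitRepo sketch: repo URL, paths, and labels are placeholders.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: ai-platform
  namespace: fleet-default
spec:
  repo: https://github.com/example/ai-platform   # placeholder repository
  branch: main
  paths:
    - manifests                                  # directory of Kubernetes manifests
  targets:
    - clusterSelector:
        matchLabels:
          env: production                        # deploy to clusters with this label
```

Fleet reconciles the repository continuously, so platform teams can ship AI platform changes as pull requests rather than manual cluster operations.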
🌍 5. Edge AI & HPC: One Platform to Rule Them All
What We Heard:
- Real-time AI is moving to the edge — from bank branches to aerospace.
- Integration with supercomputing via Kubernetes (EuroHPC, INFN) is becoming a reality.
Where SUSE Fits:
- SUSE Edge (SLE Micro) provides lightweight, immutable Linux for edge inference.
- SUSE Virtualization + SUSE Rancher Prime unify management across edge, data center, and HPC.
- This positions SUSE uniquely to power distributed training, federated learning, and multi-environment AI deployments.
Final Thoughts
As AI, security, and observability converge within Kubernetes, SUSE is building the tools to meet enterprise demand — with an open source-first mindset, hybrid flexibility, and a commitment to trust and compliance.
KubeCon 2025 proved that AI is the next strategic workload to modernize the enterprise, and SUSE is ready to lead!
If you are interested in discussing SUSE AI deployments, contact our team at suse-ai-pm@suse.com.
If you are interested in discussing potential SUSE AI partnerships, contact me directly: martina.ilieva@suse.com.