A bit of context: Kamea, a platform initially deployed on Azure IoT Hub
Kamea was born over four years ago with a clear mission: provide a turnkey IoT device management platform that clients can deploy quickly, while remaining fully customizable. Our objectives were:
- Fast onboarding: A ready-to-use platform for rapid deployment.
- Scalability: Support fleets of tens of thousands of devices.
- Ease of maintenance: Simplify day-to-day operations for teams that aren’t cloud experts.
Initially, we chose a managed PaaS approach on Azure IoT Hub. It was familiar, reliable, and cost-effective for early deployments. Our teams were already skilled with Azure, and many clients trusted it. This made sense at the time, but needs evolve.
Why move away from PaaS?
Over time, we faced new challenges:
- Clients wanted more control over hosting and sometimes preferred on-premises deployments.
- Customization became harder within the constraints of PaaS.
- We needed greater flexibility and independence from a single cloud provider.
About a year ago, we began exploring Kubernetes as an alternative.
What is Kubernetes?
Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. It groups containers into logical units, making it easier to run complex, distributed systems reliably.
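To make that declarative model concrete, here is a minimal sketch using the official Kubernetes Python client: you describe the desired state of a deployment and Kubernetes reconciles the running pods to match it. The deployment name "telemetry-ingest" and namespace "kamea" are illustrative placeholders, not names from an actual cluster.

```python
# Minimal sketch: reading and scaling a Deployment with the official
# Kubernetes Python client. Names below are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()   # local kubeconfig; in-cluster code would use load_incluster_config()
apps = client.AppsV1Api()

# A Deployment is a "logical unit" grouping identical pods.
dep = apps.read_namespaced_deployment(name="telemetry-ingest", namespace="kamea")
print(f"Current replicas: {dep.spec.replicas}")

# Declare the desired state; Kubernetes adjusts the actual pods to match it.
apps.patch_namespaced_deployment_scale(
    name="telemetry-ingest",
    namespace="kamea",
    body={"spec": {"replicas": 5}},
)
```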
Why Kubernetes?
Kubernetes offered several advantages:
- Cloud independence: No vendor lock-in.
- Flexibility and resilience: Better control over services and deployments.
- Scalability: Automated scaling for large device fleets.
- Open source: Builds client confidence in long-term viability.
- Robustness: A mature, widely adopted technology.
In short, Kubernetes gave us the freedom to design an infrastructure tailored to our needs while maintaining reliability.
From Azure IoT Hub to Kubernetes: The architecture shift
Azure-based infrastructure
Our original setup relied on Azure services such as:
- App Service for web applications
- Azure Functions for telemetry processing
- IoT Hub for device fleet management
- Service Bus for message routing
This worked well but tied us closely to Azure. To offer a more self-managed option to customers who wanted to move away from Azure IoT Hub, we needed to build a new architecture for our Kamea cloud platform.
Kubernetes architecture
Today, Kamea runs on Kubernetes with:
- Pods hosting management interfaces and telemetry flows (HTTP and MQTT)
- Redis and RabbitMQ at the core for data and message handling
- External managed databases for now, with the option to bring them inside the cluster later
This hybrid approach balances flexibility with operational simplicity.
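To illustrate how these pieces fit together, here is a simplified sketch of what one of our telemetry pods could look like in Python, using the paho-mqtt (1.x API) and redis client libraries. The topic layout, hostnames, and credentials are illustrative assumptions, not our production configuration.

```python
# Sketch of a telemetry pod: subscribes to MQTT (served by RabbitMQ's MQTT
# plugin) and caches the latest payload per device in Redis.
# Hostnames, credentials and the topic layout are hypothetical examples.
import paho.mqtt.client as mqtt
import redis

cache = redis.Redis(host="redis", port=6379)

def on_message(client, userdata, msg):
    # Topics are assumed to look like devices/<device_id>/telemetry
    device_id = msg.topic.split("/")[1]
    cache.set(f"device:{device_id}:last_telemetry", msg.payload)
    print(f"Stored telemetry for {device_id}: {msg.payload!r}")

mqttc = mqtt.Client()                      # paho-mqtt 1.x style client
mqttc.username_pw_set("kamea", "secret")   # real credentials would come from a Kubernetes Secret
mqttc.on_message = on_message
mqttc.connect("rabbitmq", 1883)
mqttc.subscribe("devices/+/telemetry", qos=1)
mqttc.loop_forever()
```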
Deployment and automation
We use GitLab pipelines for CI/CD, combined with Terraform and OpenTofu modules to automate infrastructure provisioning. Clients can reuse our templates to customize deployments across Azure, AWS, or any environment with Kubernetes, even on-premises.
Automation and modularity are key: clients get a tailored platform without sacrificing simplicity.
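As a rough illustration of what one provisioning step looks like, the Python sketch below shells out to the OpenTofu CLI much like a GitLab job would. The module path and variables are hypothetical; real pipelines pass client-specific values.

```python
# Rough sketch of a provisioning step, comparable to what a GitLab CI job runs
# against one of our OpenTofu modules. Module path and variables are hypothetical.
import subprocess

def provision(module_dir: str, environment: str, node_count: int) -> None:
    common = ["tofu", f"-chdir={module_dir}"]
    subprocess.run(common + ["init", "-input=false"], check=True)
    subprocess.run(
        common
        + [
            "apply",
            "-auto-approve",
            "-input=false",
            f"-var=environment={environment}",
            f"-var=node_count={node_count}",
        ],
        check=True,
    )

provision("modules/kamea-cluster", environment="staging", node_count=3)
```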
Migration steps
Migrating from Azure IoT Hub to Kubernetes was not a simple lift-and-shift operation. It required a structured approach to ensure continuity and reliability.
1. Audit
We began with an audit of our Azure environment to identify all the services we were using. These services fell into two categories:
- Standard services: Services based on widely used open-source technologies, such as PostgreSQL databases. Migrating these was relatively straightforward because the Kubernetes ecosystem provides mature operators and Helm charts for these components.
- Value-added services: Proprietary Azure services like IoT Hub and Service Bus, which provided unique capabilities but created strong vendor dependency. For these, we needed to find open-source alternatives that could replicate the same functionality within Kubernetes.
2. Implementation
Once the audit was complete, we moved to the implementation phase. For every Azure service, we mapped an equivalent Kubernetes-compatible solution. For example, we replaced IoT Hub and Service Bus with RabbitMQ, which could handle both MQTT messaging and internal message brokering. Similarly, we adopted PostgreSQL with TimescaleDB for time-series telemetry data. This mapping exercise was critical to ensure feature parity and maintain the robustness of our platform.
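As an illustration of the storage side of that mapping, the sketch below creates a TimescaleDB hypertable and inserts a telemetry reading using psycopg2. The schema and connection details are a simplified example, not our actual data model.

```python
# Sketch of the telemetry store: PostgreSQL + TimescaleDB via psycopg2.
# Table, column names and connection string are illustrative examples.
import psycopg2

conn = psycopg2.connect("dbname=kamea user=kamea host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS telemetry (
            time      TIMESTAMPTZ NOT NULL,
            device_id TEXT        NOT NULL,
            payload   JSONB       NOT NULL
        );
    """)
    # Turn the plain table into a TimescaleDB hypertable partitioned on time.
    cur.execute(
        "SELECT create_hypertable('telemetry', 'time', if_not_exists => TRUE);"
    )
    cur.execute(
        "INSERT INTO telemetry (time, device_id, payload) VALUES (now(), %s, %s);",
        ("device-001", '{"temperature": 21.5}'),
    )
```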
Finally, we automated the deployment process using GitLab pipelines combined with Terraform and OpenTofu modules. This allowed us to provision infrastructure consistently across different environments (Azure, AWS, or on-premises) while giving clients the flexibility to customize deployments to their needs.
RabbitMQ: The messaging backbone
RabbitMQ plays a central role in our Kubernetes-based architecture. It is an open-source message broker known for its reliability, maturity, and flexibility. In Kamea, RabbitMQ serves two critical functions:
- MQTT Message Handling: RabbitMQ acts as the entry point for all MQTT messages coming from our device fleet. This ensures that telemetry data flows seamlessly into the system for processing.
- Internal Message Routing: Beyond device communication, RabbitMQ manages the exchange of messages between different microservices within the platform. This includes status updates, event notifications, and command execution.
By consolidating these functions into RabbitMQ, we achieved a more streamlined and predictable messaging architecture. Its proven stability and active community support give us confidence in its long-term viability. Moreover, being open source, RabbitMQ aligns perfectly with our goal of reducing vendor lock-in and maintaining full control over our infrastructure.
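To give a sense of how this internal routing works in practice, here is a small Python sketch using the pika client: one service publishes a device status event to a topic exchange and another consumes it. The exchange, routing keys, and queue names are illustrative, not our actual topology.

```python
# Sketch of internal routing over RabbitMQ with pika: a publisher emits a
# device event to a topic exchange, a consumer service picks it up.
# Exchange, routing keys and queue names are hypothetical examples.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.exchange_declare(exchange="kamea.events", exchange_type="topic", durable=True)

# Publisher side: emit a status update.
channel.basic_publish(
    exchange="kamea.events",
    routing_key="device.status.updated",
    body=json.dumps({"device_id": "device-001", "status": "online"}),
)

# Consumer side: a microservice interested in all device events.
channel.queue_declare(queue="device-monitor", durable=True)
channel.queue_bind(queue="device-monitor", exchange="kamea.events", routing_key="device.#")

def handle_event(ch, method, properties, body):
    print("Received event:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="device-monitor", on_message_callback=handle_event)
channel.start_consuming()
```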
Pros and cons of Kubernetes architecture
Advantages
Migrating to a Kubernetes architecture brought several tangible benefits.
- First, we gained speed and predictability in deployments. Kubernetes allows us to roll out updates with minimal disruption, thanks to features like rolling updates and self-healing.
- Second, we now have full control over our infrastructure, which means we can customize deployments to meet specific client requirements without being constrained by a cloud provider’s limitations.
- Third, Kubernetes enables multi-cloud and on-premises flexibility, allowing us to deploy Kamea wherever our clients need it (Azure, AWS, or their own data centers).
- Finally, Kubernetes improves scalability, making it easier to manage large fleets of IoT devices efficiently.
Challenges
However, the transition was not without its challenges.
- The most significant hurdle was the learning curve for our teams. Kubernetes introduces new concepts such as pods, worker nodes, services, and labels, which required time and training to master.
- We also faced complexity in managing hybrid environments, where some components remain outside Kubernetes, such as managed databases or security services. This created additional work in managing permissions and integrations.
- Lastly, Kubernetes brings a philosophical shift in how infrastructure is managed, relying heavily on declarative configurations and automation. While powerful, this approach demanded a cultural change within our development teams.
Conclusion
Creating a new version of Kamea on Kubernetes gave us flexibility, independence, and scalability. Today, Kamea runs on multiple cloud environments (Azure, AWS) or on-premises, depending on customers’ preferences. Building a Kubernetes architecture wasn’t just a technical upgrade; it was a strategic move to empower our clients and future-proof our Kamea device management platform.