An infrastructure platform for distributed cloud

Centaurus is an open-source platform for building a unified and scalable distributed cloud infrastructure

High Scalability

Centaurus offers high scalability for large cloud infrastructure

Unified Management

Provides the same API and platform to orchestrate various cloud resources

Edge Computing

Seamlessly extend from cloud to edge, and natively manage edge workloads from cloud

Shaping the future of cloud native infrastructure in the age of AI, 5G and Edge

Build your next infrastructure with Centaurus

A next-generation infrastructure platform for large-scale cloud as well as AI and edge computing

Centaurus is an open-source platform for building a scalable distributed cloud infrastructure that unifies the orchestration and management of VM, container, and serverless workloads. As a cluster management platform, Centaurus provisions and manages compute and network resources across cloud data centers and edge sites at scale. Centaurus is also a self-learning elastic platform for AI workloads, with optimized GPU sharing and scheduling algorithms. Centaurus aims to be the next-generation cloud infrastructure that simplifies cloud management and reduces management cost in the age of Edge, AI, and 5G.

Centaurus Technologies

Arktos

Arktos is a compute cluster management system designed for large-scale clouds.

Mizar

Mizar is a high-performance cloud network powered by eXpress Data Path (XDP) and the Geneve protocol, built for large-scale clouds.

Fornax

Fornax is an autonomous, flexible, fault-tolerant, and scalable edge computing framework.

Alnair

Alnair is a self-learning elastic platform for AI workloads at scale, with advanced scheduling and resource management strategies.

Centaurus Features

Centaurus provides next generation cloud solutions and enhanced cloud networking.

  • Unified Orchestration

    Natively manages VMs, containers, and other types of workloads on demand in a unified manner.

  • Multi-tenant Cloud Platform

    Provides built-in hard multi-tenancy support for strong tenant isolation and transparent virtual clusters.

  • Large Scale and High Performance

    Addresses the key scalability and performance challenges facing very large cloud infrastructure.

  • Hyper-Scale Unified Cloud Network

    Provides fast provisioning and management of virtual network resources for VMs, containers, and other types of workloads. Leverages the natural partitioning of cloud networks, such as VPCs and subnets, for large-scale endpoint provisioning and routing.

  • High Performance Data Plane

    Leverages eXpress Data Path (XDP) and the Geneve protocol to forward and route traffic between workloads with high performance.

  • Cloud Native Networking Management Plane

    Mizar's management plane follows a cloud-native architecture based on the Kubernetes architecture framework, providing key characteristics such as scalability and elasticity.

  • Autonomous & Fault Tolerant

    Allows workloads to continue functioning through both network failures and edge cluster node failures. Edge clusters, once online, accept workload assignments automatically in a "zero touch" fashion.

  • Flexible Edge Cluster Flavors

    Supports running K8s, K3s, Arktos, and other flavors as the edge cluster, with no lock-in to any of them.

  • Hierarchical Topology

    Allows interconnecting and managing edge clusters in a "flat" single-layer structure or a tree-like multi-layer hierarchy, whichever matches the user scenario. Your edge scenario, your call.

  • Distributed Edge Networking

    Especially for 5G and MEC scenarios, supports virtualized distributed edge networking, including VPCs, subnets, service discovery, and direct edge-to-edge routing via XDP technology, with high efficiency and performance.

  • Unified Elastic Framework

    A unified framework for elastic and non-elastic distributed training, with interruption-free resource rebalancing.

  • Multifunctional Profiler

    Provides multi-level resource utilization monitoring and self-triggered trial jobs.

  • AI Oriented Scheduler

    A learning-based, utilization-driven strategy for co-scheduling with GPU affinity awareness.

  • GPU Sharing

    Built-in intercept library supporting fractional GPU resource allocation with reliability and QoS guarantees.

Partners and Supporters

Gridgain
Click2Cloud
Reinvent
Tuwien
Soda Foundation
Futurewei