One of the early feature requests for minimega was a scheduler that would launch VMs across a cluster of machines as easily as VMs are launched on a single machine. In minimega 2.3, we introduced the concept of namespaces, which attempts to provide this functionality. In minimega 2.4, we enabled namespaces by default.


namespaces are a way to automatically pool resources across a cluster. Specifically, namespaces allow you to configure and launch VMs without worrying too much about which host they actually run on. namespaces also provide a logical separation between experiments, allowing for multitenancy among cooperating users.

One of the design goals for namespaces was to minimize changes to the existing API. Specifically, we wanted users to be able to use the same scripts to run experiments on a single host and on a cluster of hundreds of hosts. To support this, there are minimal changes to the existing APIs (except behind the scenes, of course) and a few new namespace-specific APIs.

Default namespace

By default, minimega starts out in the minimega namespace. This namespace is special for several reasons:

* It is the active namespace when minimega starts and becomes active again whenever you run clear namespace.
* Settings made in it, such as the vlans range, act as defaults for namespaces that do not set their own.

namespace API

namespaces are managed by the namespace API. For example, to create a new namespace called foo and set it as the active namespace:

minimega[minimega]$ namespace foo

Now that the namespace foo is active, commands apply only to resources, such as VMs, that belong to the namespace. In a clustered environment, a newly created namespace includes all nodes in the mesh except the local node, which is treated as the head node. When there are no other nodes in the mesh, the namespace includes just the local node.
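For example, on a cluster you can list the hosts a newly created namespace spans with the ns hosts API, described below (hostnames here are illustrative):

minimega[foo]$ ns hosts
ccc[1-10]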

To return to the default namespace, use:

minimega[foo]$ clear namespace

When run without arguments, namespace prints summary info about namespaces:

minimega[minimega]$ namespace
namespace | vms | vlans    | active
foo       | 0   |          | false
minimega  | 0   | 101-4096 | true

To make it easier to run commands that target a namespace, users may prefix commands with the namespace they wish to use. For example, to display information about VMs running inside the foo namespace, any of the following work:

minimega[minimega]$ namespace foo
minimega[foo]$ .columns name,state,namespace vm info
name     | state    | namespace
vm-foo-0 | BUILDING | foo

minimega[minimega]$ namespace foo .columns name,state,namespace vm info
name     | state    | namespace
vm-foo-0 | BUILDING | foo

minimega[minimega]$ .columns name,state,namespace namespace foo vm info
name     | state    | namespace
vm-foo-0 | BUILDING | foo

Finally, to delete a namespace, again use the clear namespace API:

minimega$ clear namespace foo

Deleting a namespace cleans up all state associated with it, including killing VMs, stopping captures, deleting VLAN aliases, and removing host taps.

ns API

The ns API allows users to view and configure parameters of the active namespace such as which hosts belong to the namespace.

To add hosts to the namespace, use ns add-hosts:

minimega[foo]$ ns add-hosts ccc[1-10]

minimega only adds hosts that are already part of the mesh.

To remove hosts, use ns del-hosts:

minimega[foo]$ ns del-hosts ccc[1,3,5,7,9]

To display the list of hosts, use ns hosts:

minimega[foo]$ ns hosts

An important parameter is whether VMs should be queued or not. This is configured by the ns queueing option which defaults to false. See the Launching VMs section below for an explanation of queueing.
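For example, to enable queueing in the active namespace before launching any VMs:

minimega[foo]$ ns queueing true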

The ns API also allows you to control parameters of the scheduler, such as how it determines which host is the least loaded. This is done via the ns load API:

minimega$ ns load cpucommit

See the Scheduler section below for a description of the different ways the scheduler can compute load.

ns can also be used to display the current VM queue with ns queue and information about the schedules it has run so far with ns schedules.

Launching VMs

VMs are configured with the vm config APIs. Each namespace has a separate vm config to prevent users from clobbering each other's configurations.

When queueing is enabled, calling vm launch does not create the specified VMs immediately -- instead, they are added to a queue. This queue allows the scheduler to make smarter decisions about where it launches VMs. For example, the scheduler could place VMs that share VLANs or a disk image on the same host.

Each call to vm launch queues a new VM:

minimega[minimega]$ namespace foo
minimega[foo]$ ns queueing true
minimega[foo]$ vm launch kvm a
minimega[foo]$ vm launch kvm b
minimega[foo]$ vm info
minimega[foo]$ ns queue
... displays VM configuration for a and b ...

Calling vm launch with no additional arguments flushes the queue and invokes the scheduler:

minimega[foo]$ vm launch
minimega[foo]$ ns schedules
start               | end                 | state     | launched | failures | total | hosts
02 Jan 06 15:04 MST | 02 Jan 06 15:04 MST | completed | 1        | 0        | 1     | 1

The scheduler, described below, distributes the queued VMs to nodes in the namespace and starts them. Once the queue is flushed, the VMs become visible in vm info.


Scheduler

The scheduler for namespaces is fairly simple -- for each VM, it finds the least loaded node and schedules the VM on it. Load is calculated in one of the following ways:

* CPU commit      : Sum of the Virtual CPUs across all launched VMs.
* Network commit  : Sum of the count of network interfaces across all launched VMs.
* Memory load     : Sum of the total memory minus the total memory reserved for all launched VMs.

These values are summed across all VMs running on the host, regardless of namespace. This means that the scheduler will avoid launching new VMs on already busy nodes if multiple namespaces are using the same nodes or if there are VMs running outside of a namespace.
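The least-loaded decision can be sketched in Go (minimega itself is written in Go). This is an illustration of the idea, not minimega's actual code; the type and function names are invented, and the example uses the CPU commit metric:

```go
package main

import "fmt"

// hostLoad records the CPU commit (total vCPUs of all launched VMs,
// across every namespace) for one host. Illustrative only.
type hostLoad struct {
	name      string
	cpuCommit int
}

// leastLoaded returns the name of the host with the smallest CPU commit.
func leastLoaded(hosts []hostLoad) string {
	best := hosts[0]
	for _, h := range hosts[1:] {
		if h.cpuCommit < best.cpuCommit {
			best = h
		}
	}
	return best.name
}

func main() {
	hosts := []hostLoad{
		{"ccc1", 8}, // e.g. four 2-vCPU VMs already running
		{"ccc2", 4}, // e.g. two 2-vCPU VMs already running
	}
	// Schedule each queued 2-vCPU VM on the least-loaded host, then
	// charge its vCPUs against that host's commit.
	for _, vcpus := range []int{2, 2} {
		h := leastLoaded(hosts)
		fmt.Println("schedule on", h)
		for i := range hosts {
			if hosts[i].name == h {
				hosts[i].cpuCommit += vcpus
			}
		}
	}
}
```

Note that because load counts every VM on the host, a second namespace's VMs on ccc1 would push new VMs toward ccc2 in exactly the same way.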

In order to allow users to statically schedule some portions of their experiment (such as when there is hardware or people in the loop), we have added two new vm config APIs:

* vm config schedule   : schedule these VMs on a particular node.
* vm config coschedule : limit the number of other VMs coscheduled on the same node.

These two APIs can be used together or separately:

minimega$ vm config schedule ccc50
minimega$ vm config coschedule 0
minimega$ vm launch kvm solo

Instructs the scheduler to launch a VM called solo on ccc50 and not to schedule any other VMs on ccc50.

minimega$ vm config coschedule 0
minimega$ vm launch kvm solo

Instructs the scheduler to launch a VM called solo on any node and not to schedule any other VMs on that node.

minimega$ vm config coschedule 3
minimega$ vm launch kvm quad[0-3]

Instructs the scheduler to launch four VMs called quad[0-3] on any nodes and to schedule at most three other VMs on each of those nodes. Note: because of the way the least-loaded scheduler works, quad[0-3] will most likely not be scheduled on the same node.

vm API

Besides the changes noted above to vm launch, all of the vm APIs are namespace-specific. These commands are broadcast out to all hosts in the namespace and the responses are collected on the issuing node. vm APIs that target one or more VMs now apply to VMs across the namespace on any host.

Note: because of the above changes, minimega now enforces globally unique VM names within a namespace. VMs of the same name can exist in different namespaces. Users should use VM names rather than IDs to perform actions on VMs since multiple hosts can have VMs with the same ID.
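For example, since names are unique within the namespace, a VM can be started by name from the head node without knowing which host it runs on (VM name illustrative):

minimega[foo]$ vm start vm-foo-0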

vlans API

Setting the vlans range in the default namespace applies to all namespaces that do not have a range of their own.
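For example, to give the active namespace its own range of VLANs to allocate aliases from (the values and exact argument form here are a sketch; see help vlans for the precise syntax):

minimega[foo]$ vlans range 200 299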

cc API

minimega starts a separate cc server for each namespace. Each server creates a separate miniccc_response directory in the files directory.

host API

The host API broadcasts the host command to all hosts in the namespace and collects the responses on the issuing host when a namespace is active. Otherwise, it only reports information for the issuing node.

capture API

The capture API is partially namespace-aware. Specifically, the commands to capture traffic for a VM work perfectly with namespaces -- traffic will be captured on the node that runs the VM and can be retrieved with file get when the capture completes. Capturing traffic on a bridge (PCAP or netflow) is not advised -- it may contain traffic from other experiments. See help capture for more details.
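Capturing PCAP for a single VM and retrieving it afterwards might look like the following (the VM name, interface index, and filename are illustrative, and the exact argument order should be checked against help capture):

minimega[foo]$ capture pcap vm vm-foo-0 0 vm-foo-0.pcap
... run the experiment, then stop the capture ...
minimega[foo]$ file get vm-foo-0.pcap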


The minimega authors

22 Mar 2016