Apache Karaf Cellar

In a previous blog post, I designed and implemented Cellar, a small clustering engine for Apache Karaf powered by Hazelcast. Since then, Cellar has grown in features and was eventually accepted into Karaf as a subproject.

This post will provide a brief description of Cellar as it is today.

Cellar Overview
Cellar is designed to provide Karaf with the following high-level features:

  • Discovery
      • Multicast 
      • Unicast
  • Cluster Group Management
      • Node Grouping
  • Distributed Configuration Admin
      • per Group distributed configuration data
      • event driven distributed / local bridge
  • Distributed Features Service
      • per Group distributed features/repos info
      • event driven distributed / local bridge
  • Provisioning Tools
      • Shell commands for cluster provisioning
The core concept behind Cellar is that each node can be part of one or more groups. Each group provides the node with distributed memory for keeping data (e.g. configuration, features information, other) and a topic used to exchange events with the rest of the group members.
Each group comes with a configuration that defines which events are broadcast and which are not. Whenever a local change occurs on a node, the node reads the setup information of all the groups it belongs to and broadcasts the event to the groups that whitelist that specific event.
The broadcast happens via the distributed topic provided by the group. For the groups that support the broadcast, the distributed configuration data is also updated so that nodes joining in the future can pick up the change.
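The whitelist check and broadcast flow described above can be sketched roughly as follows. This is a hypothetical illustration, not Cellar's actual implementation: the class and method names are made up, and the group's distributed memory and topic are stubbed with plain maps and a print.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch of the per-group broadcast decision described above. */
public class GroupBroadcastSketch {
    // Each group keeps a whitelist of event ids it is willing to broadcast.
    static final Map<String, Set<String>> groupWhitelists = new HashMap<>();
    // Stand-in for each group's distributed memory (a Hazelcast map in real Cellar).
    static final Map<String, Map<String, String>> groupMemory = new HashMap<>();

    static void onLocalChange(List<String> memberGroups, String eventId, String value) {
        // The node consults the setup of every group it belongs to...
        for (String group : memberGroups) {
            Set<String> whitelist = groupWhitelists.getOrDefault(group, Set.of());
            // ...and only broadcasts to groups that whitelist this event.
            if (whitelist.contains(eventId) || whitelist.contains("*")) {
                // Update the group's distributed memory so future members can catch up...
                groupMemory.computeIfAbsent(group, g -> new HashMap<>()).put(eventId, value);
                // ...and publish on the group's topic (stubbed as a print here).
                System.out.println("broadcast to " + group + ": " + eventId);
            }
        }
    }

    public static void main(String[] args) {
        groupWhitelists.put("default", new HashSet<>(List.of("my.pid")));
        groupWhitelists.put("quarantine", new HashSet<>());
        // The change is broadcast to "default" but filtered out for "quarantine".
        onLocalChange(List.of("default", "quarantine"), "my.pid", "changed");
    }
}
```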
Supported Events
There are 3 types of events:
  • Configuration change event
  • Features repository added/removed event
  • Features installed/uninstalled event
For each of the event types above, a group may be configured to enable synchronization and to provide a whitelist/blacklist of specific event IDs.
The default group is configured to allow synchronization of configuration. This means that whenever a change occurs via Config Admin to a specific PID, the change passes to the distributed memory of the default group and is also broadcast to all other default group members using the topic.
This happens for all PIDs except org.apache.karaf.cellar.node, which is blacklisted and will never be written to or read from the distributed memory, nor broadcast via the topic.
Should the user decide, any PID can be added to or removed from the whitelist/blacklist.
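For illustration, in later Cellar releases this per-group setup lives in etc/org.apache.karaf.cellar.groups.cfg; the exact property names may differ in the version described here, so treat the snippet as a sketch:

```
# default group: sync configuration changes in and out,
# but never the node-local PID
default.config.whitelist.inbound = *
default.config.whitelist.outbound = *
default.config.blacklist.inbound = org.apache.karaf.cellar.node
default.config.blacklist.outbound = org.apache.karaf.cellar.node
```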
Syncing vs Provisioning
Syncing (making a change on one node and broadcasting the event to all other nodes of the group) is one way of managing the Cellar cluster, but it's not the only way.
Cellar also provides a lot of provisioning capabilities. It provides tools (mostly via the command line) that allow the user to build a detailed profile (configuration and features) for each group.
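As an example, a group profile could be built from the Karaf shell roughly like this (the command names follow later Cellar releases and may differ in the version shown in the demo):

```
karaf@root> cluster:group-create web-servers
karaf@root> cluster:group-join web-servers node2
karaf@root> cluster:feature-install web-servers webconsole
```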
Cellar in action
To see all of the things described so far in action, you can have a look at the following 5-minute Cellar demo:

Note: The video was shot before Cellar's adoption by Karaf, so the feature URL and configuration PIDs are out of date, but the core functionality is the same.

I hope you enjoy it!

Comments (10)

  1. gembin

    Great post!

    "The core concept behind cellar is that each node can be a part of one or more groups"

    I don't quite understand this concept:
    what's the purpose of a node being able to belong to several groups? If one node can belong to several groups, how are conflicts handled when synchronizing between nodes that belong to different groups? I assume different groups have different features.

  2. iocanel

    @Gembin: Thanks for your comment!

    Synchronization is the default, but optional. A user can configure groups that do not sync but have a preconfigured set of configuration/features.

    With this in mind, a node can be part of more than one group without conflicting, but it's up to the user.

  3. gembin

    thanks for your quick reply!

    So, if synchronization is enabled, conflicts may happen. But if a group is configured not to sync, that means the nodes in the group are not clustered.

    From this point of view, a group is a manageable unit for a set of nodes, right? And what's the relationship between a group and a cluster?

    Please forgive me if the answer to my question is too obvious.


  4. iocanel

    @gembin: A cluster can consist of many groups. Groups are collections of nodes with similar characteristics that can share configuration, features, bundles, etc.

  5. Favalos

    Hi Iocanel,

    Is there an API for Cellar? For example, I would like to pull all nodes in a group from my code. Is that possible?



  6. iocanel

    Most of Cellar's API is part of the cellar-core module. Unfortunately, there is currently no javadoc for it.

    To pull all the nodes of a group, you will need to import the GroupManager from the OSGi service registry using the interface org.apache.karaf.cellar.core.GroupManager. Then you can call findGroupByName to get the Group, and getMembers() on the group to get all the nodes that are part of it.
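    The lookup described above can be sketched like this. The interfaces below are simplified stand-ins for the real ones in cellar-core (which you would obtain from the OSGi service registry), so treat it as an illustration rather than the actual API:

```java
import java.util.HashSet;
import java.util.Set;

/** Hypothetical stand-ins for the cellar-core interfaces mentioned above. */
interface Node { String getId(); }

interface Group {
    String getName();
    Set<Node> getMembers();
}

interface GroupManager {
    Group findGroupByName(String name);
}

public class GroupLookupSketch {
    public static void main(String[] args) {
        // In Karaf you would look GroupManager up in the OSGi service registry;
        // here we fake one group ("default") with two nodes.
        Node n1 = () -> "node-1";
        Node n2 = () -> "node-2";
        Set<Node> members = new HashSet<>();
        members.add(n1);
        members.add(n2);

        Group defaultGroup = new Group() {
            public String getName() { return "default"; }
            public Set<Node> getMembers() { return members; }
        };
        GroupManager manager = name ->
                "default".equals(name) ? defaultGroup : null;

        // The calls described above: findGroupByName, then getMembers().
        Group group = manager.findGroupByName("default");
        for (Node node : group.getMembers()) {
            System.out.println("member: " + node.getId());
        }
    }
}
```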


  7. Sankar

    How can I store and retrieve values (which need to be distributed to other nodes in the group) in a Map or Queue from my code?

  8. iocanel

    Any map you use will be available to all nodes of the group. Actually, maps will be available to all nodes, even in other groups. The same applies to queues.

    For its own needs, Cellar distinguishes collections between groups by convention (having the group name as part of the queue/map name).

    From your application, you can use the GroupManager/ClusterManager services to get the nodes/groups that apply to your current node.

  9. Sankar

    How do I make changes (I need to add two IP addresses) to hazelcast.xml so that it overrides hazelcast-default.xml (currently Cellar loads the default)? In which location do I need to place that file inside the Fuse instance? I tried placing it under etc/hazelcast.xml, but Cellar is not picking up the configuration from my file.

  10. iocanel

    By default, your nodes will be discovered using multicast.

    In cases where you want to explicitly configure IPs, you can edit:

    and add a comma-separated list of IPs to the property tcpIpMembers. You can also set multicastEnabled=false if you don't want multicast.
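    For example (the IP addresses below are placeholders, and the exact configuration file depends on the Cellar version):

```
# Disable multicast discovery and list the cluster members explicitly
multicastEnabled = false
tcpIpMembers = 192.168.1.10,192.168.1.11
```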

Comments are closed.