This is the second post in my series about the service catalog. If you haven’t already, please read the first post: service catalog: introduction.
In this second post I’ll create from scratch a Spring Boot application that exposes JPA CRUD operations via REST. The application will use a Microsoft SQL Server database managed by the service catalog, and I will demonstrate how you can automagically connect to it using the service catalog connector.
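To give a feel for the kind of application involved, here is a minimal sketch of a Spring Boot app exposing a JPA entity over REST. The entity, package, and class names are illustrative, not the actual code from the post; it assumes spring-boot-starter-data-jpa and spring-boot-starter-data-rest are on the classpath.

```java
package com.example.catalog;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@SpringBootApplication
public class CatalogDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(CatalogDemoApplication.class, args);
    }
}

// A trivial JPA entity; with spring-boot-starter-data-rest on the
// classpath, the repository below is exposed automatically at /customers.
@Entity
class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

@RepositoryRestResource
interface CustomerRepository extends CrudRepository<Customer, Long> {
}
```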
There is a Spring Cloud project called Spring Cloud Connectors, which is all about connecting to cloud-managed services. I have been working on an implementation specific to the service catalog. The idea is that you use the service catalog to manage the services, and the service catalog connector to connect to them transparently.
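For context, this is roughly what a Spring Cloud Connectors based configuration looks like: a config class asks the connector for the single bound relational service and exposes it as a DataSource. This is a generic Connectors sketch, not the service catalog connector’s own code.

```java
import javax.sql.DataSource;

import org.springframework.cloud.config.java.AbstractCloudConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CloudDataSourceConfig extends AbstractCloudConfig {

    // Resolves the single bound relational service (e.g. the SQL Server
    // instance provisioned through the service catalog) to a DataSource.
    @Bean
    public DataSource dataSource() {
        return connectionFactory().dataSource();
    }
}
```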
This is the first of a series of posts around the service catalog. The end goal is to demonstrate how the service catalog can simplify building apps on kubernetes and openshift.
The first part will cover:
The target environment will be OpenShift 3.10 on Linux, using `oc cluster up` for development purposes.
Having worked with Kubernetes since its early days, there are countless times where I had to create manifests for the services my application uses. By services I am referring to things like databases, messaging systems, or any other piece of third-party software my application might need.
This is a small post that describes how I made authoring Markdown, org-mode, etc. easier by using snippets that help me handle links like a pro.
I am a heavy user of org-mode. I use it for taking notes, writing blog posts, presentations and so on. As a software developer I often use Markdown too. In both cases, at some point I have to deal with links.
Embarrassingly enough, I used to rely on my browser’s bookmarks to handle links, so my workflow looked a little like:
Every now and then I see on social media people sharing the same old story: “Using shell scripting to work around the limitations of their DevOps tools”. I’ve done it, my colleagues are doing it, and most likely you have done it yourself.
So it seems that shell scripting is used to do the dirty work, yet it’s often considered by many to be a last resort. If you search the web for popular ‘DevOps’ tools and skills, you’ll probably find:
Lately I keep hearing about “how much software development has changed over the last half of the decade”. This usually refers to the adoption of containers, cloud, etc. I would like to focus on another aspect of that change: the plethora of development-related systems and services.
So it’s typical for a team to have:
Add email to that and you realize that most development-related tasks nowadays take place in the browser. Unfortunately, browsers by nature are unaware of the content they serve, so it’s not trivial to automate your workflow in the browser. So, if the browser is not going to play the role of ‘Swiss army knife’ for development, then what?
During the summer I had the chance to play a little bit with Jenkins inside Kubernetes. More specifically, I wanted to see the best way to get the Docker Workflow Plugin running. The idea was to have a Pod running Jenkins and use it to run builds that are defined using the Docker Workflow Plugin. After a lot of reading and a lot more experimenting, I found out that there are many ways of doing this, each with different pros and cons. This post goes through all the available options. More specifically:
OpenShift takes security seriously. Sometimes more seriously than I’d like (mostly because I am lazy). One such example is the fact that containers run using arbitrary users. This is done as an extra measure to limit damage, should a process somehow escape its container boundaries.
But how does it affect users?
Users need to follow certain guidelines when creating container images.
You don’t have a known uid: the uid of the user is not known in advance, and there is no way of controlling it.
Yesterday I was having a talk with Adrian Cole, and during our talk he had an unpleasant surprise: he found out that he had left a node running on Amazon EC2 for a couple of days and that it would cost him several bucks.
This morning I was thinking about his problem and about ways to help you avoid situations like this.
My idea was to build a simple project that would notify you of your running nodes in the cloud via email at a given interval.
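As a rough sketch of the idea, here is how such a project might list running nodes with jclouds; note that the API shown is the later ContextBuilder style, and the provider id, environment variable names, and the stubbed email step are all illustrative assumptions rather than the project’s actual code.

```java
import java.util.Set;

import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;
import org.jclouds.compute.domain.ComputeMetadata;

public class RunningNodesReporter {
    public static void main(String[] args) {
        // Illustrative: credentials come from environment variables.
        ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
                .credentials(System.getenv("AWS_ACCESS_KEY_ID"),
                             System.getenv("AWS_SECRET_ACCESS_KEY"))
                .buildView(ComputeServiceContext.class);
        try {
            ComputeService compute = context.getComputeService();
            Set<? extends ComputeMetadata> nodes = compute.listNodes();
            // The real project would format this into an email and send it
            // at a given interval; printing stands in for that here.
            System.out.println("You have " + nodes.size() + " node(s) running:");
            for (ComputeMetadata node : nodes) {
                System.out.println(" - " + node.getId());
            }
        } finally {
            context.close();
        }
    }
}
```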
I am currently returning home from JavaOne 2011. I am at the airport in Munich, waiting for my connecting flight to Athens. Once again my flight is delayed, and it’s a great chance to blog a bit about JavaOne.
I had the chance to give a BOF session about Karaf Cellar last Tuesday night. Even though the session was really late (20:30) and there were a lot of parties going on at that time (actually, I was at the JBoss party right before my presentation), quite a few people attended. The best part was that most of the people who attended were really eager to hear about Karaf & Cellar, and I received a lot of great, straight-to-the-point questions. So I really enjoyed the talk and had a lot of fun.
In a previous blog post, I designed and implemented Cellar (a small clustering engine for Apache Karaf powered by Hazelcast). Since then, Cellar has grown in features and was eventually accepted into Karaf as a subproject.
This post will provide a brief description of Cellar as it is today.
Cellar is designed to provide Karaf with the following high-level features:
The core concept behind Cellar is that each node can be part of one or more groups, which provide the node with distributed memory for keeping data (e.g. configuration, feature information, and more) and a topic used to exchange events with the rest of the group members.
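The concept is easy to picture with plain Hazelcast primitives. The sketch below is an illustration of the idea, not Cellar’s actual code: per group, a distributed map holds shared data and a topic carries cluster events. The group name and keys are made up, and the API shown is the Hazelcast 3.x style.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class GroupConceptSketch {
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();

        // Distributed memory for the "default" group's data
        // (configuration, feature information, and so on).
        IMap<String, String> groupData = instance.getMap("group.default.data");
        groupData.put("config/org.example.demo", "some-property=value");

        // Topic used to exchange events with the rest of the group members.
        ITopic<String> groupTopic = instance.getTopic("group.default.topic");
        groupTopic.addMessageListener(new MessageListener<String>() {
            @Override
            public void onMessage(Message<String> message) {
                System.out.println("Cluster event: " + message.getMessageObject());
            }
        });
        groupTopic.publish("configuration-changed");
    }
}
```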
For the last couple of years, OSGi and cloud computing have been two buzzwords that you don’t often see go hand in hand. jclouds is going to change that, since the 1.0.0 release is OSGi ready and it also provides direct integration with Apache Karaf.
For the last couple of weeks I have been working with the jclouds team to improve the OSGification of jclouds and to provide integration with Apache Karaf. I will not go into much detail in this post, since there is a wiki. I will, however, add a small demo that shows how easy it is.

I presented on OSGi and Apache Karaf at the Java Hellenic User Group.
It was a great event with very interesting presentations. The full list of presentations can be found here.
Regarding my presentation, I was a bit nervous at first, since I hadn’t practiced my “presentation” skills for a while, but things got better as time went by. I had the chance to meet a lot of interesting people and discuss OSGi, Apache Karaf & Apache ServiceMix. The slides of the presentation can be found at: Slide Share.
EDIT: The project “cellar” has been upgraded with a lot of new features, which are not described in this post. A new post will be added soon.
I have been playing a lot with Hazelcast lately, especially pairing it with Karaf. If you haven’t already, you can read my previous post on using Hazelcast on Karaf.
In this post I am going to take things one step further and use Hazelcast to build a simple clustering engine on Karaf.
Over the last few months, Hazelcast has caught my attention. I first saw the JIRA for the camel-hazelcast component, then I read about it, ran some examples, and eventually fell in love with it.
If you are not already familiar with it, Hazelcast is an open source clustering platform that provides a lot of features, such as:
You can visit the Hazelcast Documentation for more information. In this blog post I will show how to run Hazelcast on Apache Karaf or Apache ServiceMix, and I will provide an example application that creates a Hazelcast instance, deploys the Hazelcast monitoring web application, and adds a couple of shell commands to Apache Karaf.
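For readers who haven’t used Hazelcast before, creating an instance and touching a distributed structure takes only a few lines. This is a generic quickstart (Hazelcast 3.x style API, illustrative names), not the example application from the post:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class HazelcastQuickstart {
    public static void main(String[] args) throws InterruptedException {
        // Programmatic configuration; the instance name is illustrative.
        Config config = new Config();
        config.setInstanceName("demo-instance");

        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);

        // Every member of the cluster sees the same distributed queue.
        IQueue<String> queue = instance.getQueue("demo-queue");
        queue.put("hello from " + instance.getName());
        System.out.println("Took from queue: " + queue.take());

        instance.shutdown();
    }
}
```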
I am currently in the middle of my Xmas vacation and was just about to download a movie for tonight. While downloading, I checked my emails, which I hadn’t really checked since Christmas Eve.
An invitation to join the Apache ServiceMix project as a committer was waiting for me on the top of my Inbox.

Of course I accepted the invitation and immediately started blogging about it… That’s a great ending for 2010, but it’s also a serious indication that I am going to need a time transplant for 2011!
I just returned home from JavaOne and Oracle Develop 2010 (which was also my first JavaOne) and I thought it would be a good idea to take 5 minutes and share the experience.
The city of San Francisco was awesome, and I couldn’t imagine any other place in the world better suited for the job. The weather, the size, and the facilities were exactly what such an event required. The organization was good enough, and there were tons of sessions that I found exciting.
Karaf 2.1.0 has just been released! Among other new features, it includes a major revamp of the JAAS module support:
This post will use all three features to create a secured Wicket application on Karaf, using Karaf’s JAAS modules and Wicket’s auth-roles module.
The application we are going to build is a simple Wicket application. It will be deployed on Karaf, and the user credentials will be stored in a MySQL database. For password encryption we will use Karaf’s Jasypt encryption service implementation, encrypting passwords with the MD5 algorithm in hexadecimal format.
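To show how the pieces can fit together, here is a hedged sketch of a Wicket auth-roles session delegating authentication to a JAAS realm such as Karaf’s. The realm name "karaf" is Karaf’s default, the API shown is the later org.apache.wicket.authroles packaging, and the role mapping is deliberately simplistic; the post’s original code may differ.

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

import org.apache.wicket.authroles.authentication.AuthenticatedWebSession;
import org.apache.wicket.authroles.authorization.strategies.role.Roles;
import org.apache.wicket.request.Request;

public class JaasWebSession extends AuthenticatedWebSession {

    public JaasWebSession(Request request) {
        super(request);
    }

    @Override
    public boolean authenticate(String username, String password) {
        try {
            // "karaf" is the default JAAS realm configured by Karaf.
            LoginContext context = new LoginContext("karaf", callbacks -> {
                for (Callback callback : callbacks) {
                    if (callback instanceof NameCallback) {
                        ((NameCallback) callback).setName(username);
                    } else if (callback instanceof PasswordCallback) {
                        ((PasswordCallback) callback).setPassword(password.toCharArray());
                    }
                }
            });
            context.login();
            return true;
        } catch (LoginException e) {
            return false;
        }
    }

    @Override
    public Roles getRoles() {
        // A real application would map the JAAS principals to Wicket roles.
        return isSignedIn() ? new Roles(Roles.USER) : new Roles();
    }
}
```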
One week after my vacation, and still suffering from “post-vacation depression”, this Monday seemed like a nightmare.
I went to the office feeling the urge to go get myself a huge Carafe of coffee (cups have long been proven inefficient), when an incoming email drew my attention.
It was an invitation to join the Apache Karaf team as a committer.
This is the first open source project I’ve joined, and I’m very thrilled (if not overreacting) about it, which is why I decided to blog about it.
EDIT: Hibernate is now OSGi ready, so most of this content is now completely outdated.
The full source for this post has moved to github under my blog project on branch: wicket-spring-3-jpa2-hibernate-osgi-application-on-apache-karaf.
Recently I attempted to modify an existing CRUD web application for OSGi deployment. During the process I encountered a lot of issues, such as:

- Lack of OSGi bundles.
- Trouble wiring the tiers of the application together.
- Issues with the OSGi container configuration.
- Lack of detailed examples on the web.

So, I decided to create such a guide and provide the full source for a working example (a very simple person CRUD application).
EDIT: I am more than happy that this post is now completely obsolete. Hibernate is now OSGi ready, Yay!
I was trying to migrate an application that uses JPA 2.0 / Hibernate to OSGi. I found out that Hibernate does not provide OSGi bundles. There are some Hibernate bundles available in the SpringSource Enterprise Bundle Repository; however, none are available for Hibernate 3.5.x, which implements JPA 2.0. So I decided to create them myself and share the experience with you.