[{"authors":null,"categories":null,"content":"Introduction In my previous post Workflows everywhere pt. 1 I tried to define workflows and enumerate their functional and non-functional requirements. The post concluded with the realization that in many case we need workflow engines to power our workflows.\nThis post defines what a workflow engine is and lists some of the most popular engines by category. Or at least that was the original intention, but there is a twist.\nWhat Is a Workflow Engine? Workflow engines are systems designed to simplify the creation and execution of workflows. They orchestrate the flow of information between the activities that compose the workflow based on predefined logic, conditions and dependencies.\nSimply put they are systems that allow users to easily design workflows then take care of executing each activity / step of the workflow passing data between them.\nWorkflow engine categorization This is the part where this posts gets interesting, as there is no single way to categorize workflow engines. One could categorize engines by purpose. Others may be more interested in the architecture characteristics of the engine. Operational is also an interesting way to categorize engines and let’s not forget about the costs and licensing of the engine.\nFor example a teleological categorization (purpose based) could introduce categories like:\nBusiness Process Management (BPM) engines Camunda (Zeebe) Flowable Data Processing engines Apache Airflow Luigi Machine Learning (ML) engines Kubeflow Pipelines Microservices orchestration engines Temporal Camunda (Zeebe) CI/CD engines Tekton Argo Workflows While an architectural categorization could introduce categories like:\nLog based engines Temporal Camunda (Zeebe) State machine based engines Flowable Activiti DAG based engines Apache Airflow Dagster Code flows Apache Camel Given the large number of workflow engines available today, it is not practical to list them all statically using different categorizations. What would be more practical is to use an interactive approach for exploring multidimensional categorizations of workflow engines. Additionally, it would be useful if it was something that could be easily updated as new engines are released or existing ones evolve.\nAn interactive Workflow Engine explorer I begun to entertain the idea of creating an interactive web application that would help me experiment and visualize different categorizations. As soon as I got the first draft it was pretty clear to me that this is something that could be useful to others as well.\nTo try it out, at: https://iocanel.com/workflow-engines. The actual code for the application can be found at: https://github.com/iocanel/workflow-engines\nI expect the content of the application to evolve over time, as new engines are released or existing ones evolve. The categorization is likely to change as well, let’s collaborate and keep it up to date!\nNext steps I intend to start visiting some of the engines listed in the application and write more detailed posts about them. 
Most likely, I will start with the ones that I have not used in the past or maybe the ones that I could connect to my day job, but I am open to suggestions.\n","date":1753747200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"db1afb2a23647259a4705db69f8c6670","permalink":"https://iocanel.com/2025/07/workflows-everywhere-pt.-2/","publishdate":"2025-07-29T00:00:00Z","relpermalink":"/2025/07/workflows-everywhere-pt.-2/","section":"post","summary":"Introduction In my previous post Workflows everywhere pt. 1 I tried to define workflows and enumerate their functional and non-functional requirements. The post concluded with the realization that in many cases we need workflow engines to power our workflows.\nThis post defines what a workflow engine is and lists some of the most popular engines by category. Or at least that was the original intention, but there is a twist.\nWhat Is a Workflow Engine? Workflow engines are systems designed to simplify the creation and execution of workflows. They orchestrate the flow of information between the activities that compose the workflow based on predefined logic, conditions and dependencies.\n","tags":null,"title":"Workflows everywhere pt. 2","type":"post"},{"authors":null,"categories":null,"content":"Introduction Workflows are everywhere. From CI/CD pipelines and system / data integration to business process automation. It wouldn’t be too far-fetched to say that even modern software build tools like make, maven or npm are in their own way workflow engines.\nThere are countless tools out there that help people define, execute and monitor workflows, varying from simple no-code tools to complex frameworks that allow developers to define workflows in code, or even architect their software as workflows.\nToday, the rise of Agentic AI amplifies the need for workflows. As agents need to integrate and coordinate with external systems and other agents, workflows provide a structured way to manage these interactions.\nWhat is a workflow? A Workflow is a chain reaction: one task triggers the next, transforming data or documents until a goal is met.\nA more traditional definition is: A Workflow is a repeatable, orchestrated sequence of activities/steps that transforms inputs into desired outputs by passing documents, data, or work items between processing entities—humans, services, or agents—under a defined control flow, triggers, and data-management rules.\nKey Characteristics Automation of Procedures Breaking a larger process into discrete “atomic” steps that an engine (software) can schedule and execute, without human intervention once triggered. In some cases a human may be added in the loop to approve or review a step, but the control of the flow should not be in the hands of the human. This is often encountered in BPMN-based workflows. The diagram below shows a simplified workflow for a loan approval process that consists of multiple automated steps and a human approval step.\nControl Flow and Dependencies As already implied by the previous example, a workflow is not just a collection of activities executed without human orchestration. A workflow also defines:\nTriggers Execution order Branching Dependencies The diagram below shows these concepts in a simplified back-office workflow for processing pending orders.\nLet’s examine these concepts in more detail:\nTriggers\nTriggers initiate workflows based on events, schedules, or conditions. 
They can be time-based (e.g., daily at 18:00) or event-based (e.g., new data arrival).\nExecution Order\nWorkflow schemas define the precise routing logic that drives a workflow’s progression. They are often represented as Directed Acyclic Graphs (DAGs).\nBranching\nA workflow can branch into multiple paths either based on conditions (e.g. if-else) or for the sake of parallelism (fan-out). In messaging terminology, these branching patterns are often referred to as Content-Based Routers or Recipient Lists respectively.\nDependencies\nThe opposite of branching is merging. An activity that has multiple dependencies needs to wait until all its dependencies are satisfied before it can execute. And since we did mention messaging patterns, this is often referred to as the Aggregator.\nThis requirement is important, as it means that the workflow engine must keep track of the state and dependencies of each activity.\nImplementing Workflows Now that we have defined the functionality of a workflow, let’s take a moment to think about how we could implement a workflow.\nCan we use a programming language? Of course we can. After all, a computer program itself is pretty similar to a workflow. The main difference is that workflows focus on activities, while a program focuses on instructions.\nSo why don’t we just do it? And maybe use libraries that can help us abstract some of the commonly used patterns? The sketch below illustrates the idea.
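To make this concrete, here is a minimal sketch of what “workflows as plain code” could look like. The Step type and the whole mini-engine are hypothetical, written just for this post; they are not part of any library mentioned here.

import java.util.function.Function;

// A minimal sketch of the "workflow as plain code" idea.
// The Step type is hypothetical, not from any workflow library.
public class WorkflowSketch {

    // Each activity is just a named function from input to output.
    record Step<I, O>(String name, Function<I, O> action) {
        O run(I input) {
            System.out.println("Running step: " + name);
            return action.apply(input);
        }
    }

    public static void main(String[] args) {
        Step<String, String> fetch = new Step<>("fetch-order", id -> "order-" + id);
        Step<String, Boolean> validate = new Step<>("validate-order", order -> !order.isEmpty());
        Step<Boolean, String> notify = new Step<>("notify-customer", ok -> ok ? "shipped" : "rejected");

        // The "engine" is nothing more than sequential invocation.
        String result = notify.run(validate.run(fetch.run("42")));
        System.out.println("Workflow result: " + result);
    }
}

This covers the happy path of control flow, but nothing more — and that is exactly the catch: the qualities discussed next are what this naive approach lacks.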
Non-functional requirements In Software Engineering we rarely stop at defining the functional characteristics of a problem. We also need to take the quality characteristics into consideration. In other words, we need to consider how well our solution needs to address these problems; we need to consider the non-functional requirements.\nSo what are the non-functional requirements applicable to workflows?\nPerformance The ability to execute workflows within acceptable time limits, ensuring that activities are completed efficiently. Performance metrics include:\nWorkflow startup time Activity execution latency Task scheduling efficiency Resource utilization Scalability The system should be capable of:\nHandling an increasing number of concurrent workflows Managing workflows that include a large number of steps Processing high-volume input/output data Reliability and Availability The ability to execute workflows without downtime, even in the face of failures. For example, a workflow should be executed even if some of the workers are down.\nDurability and Persistence The ability to persist workflow state and history, so that:\nProgress is not lost on restart or failure Execution history is available for audits or rollbacks Long-running or paused workflows can resume correctly Observability The ability to monitor workflow execution, track progress, and debug issues. Developers and operators should have access to:\nWorkflow status and …","date":175176e4,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"374c58566496a3adc562053be16941fe","permalink":"https://iocanel.com/2025/07/workflows-everywhere-pt.-1/","publishdate":"2025-07-06T00:00:00Z","relpermalink":"/2025/07/workflows-everywhere-pt.-1/","section":"post","summary":"Introduction Workflows are everywhere. From CI/CD pipelines and system / data integration to business process automation. It wouldn’t be too far-fetched to say that even modern software build tools like make, maven or npm are in their own way workflow engines.\nThere are countless tools out there that help people define, execute and monitor workflows, varying from simple no-code tools to complex frameworks that allow developers to define workflows in code, or even architect their software as workflows.\n","tags":null,"title":"Workflows everywhere pt. 1","type":"post"},{"authors":null,"categories":["development"],"content":"Signing Git Commits I recently had a discussion with two fellow engineers about secure coding practices. After the discussion I realized that I am neglecting one of the most important practices: signing my commits.\nThere are tons of articles on the internet explaining why and how. These are my notes on the subject that I decided to publish.\nThese notes actually use literate programming, so they are a mix of notes and code you can actually use via org-mode. You can find the actual notes file here.\nWhy bother? Signing commits allows you to track who made the commit and verify that the commit has not been tampered with. More specifically, it allows you to verify that the commit is signed using either a GPG key or an SSH key.\nDoes this protect you in the case your Github account gets compromised? No, it does not, as the attacker most likely will change the signing key. Still, it verifies that the commit was not signed by your key. If you are using GPG, where keys are public, it also allows others to verify that the commit was signed by you.\nUsing GPG to sign commits Let’s see how we can use GPG to sign commits.\nExtracting the GPG Key ID First, we need to extract the key ID of the GPG key we want to use.\n1 gpg --list-keys \u0026#34;iocanel@gmail.com\u0026#34; | grep -v pub | grep -v sub | grep -v uid | xargs The sections below will use `$KEY_ID` to refer to the actual value.\nConfigure Git to Use Your GPG Key Extract the key ID and use it to configure Git:\n1 2 3 4 git config --global commit.gpgsign true git config --global gpg.program gpg git config --global gpg.format openpgp git config --global user.signingkey $KEY_ID Export GPG Public Key for GitHub Export your public key in ASCII-armored format for GitHub:\n1 gpg --armor --export $KEY_ID Add GPG Key to GitHub There are two ways of dealing with it:\nManually Using the Github API Add it manually to the Github settings page\nGo to https://github.com/settings/keys and manually add it.\nUse gh and the Github API\nEnable API access to GPG keys\n1 gh auth refresh -h github.com -s admin:gpg_key Add the Key using gh and the API\n1 2 3 gpg --armor --export iocanel@gmail.com \u0026gt; /tmp/publickey.asc gh api --method POST -H \u0026#34;Accept: application/vnd.github+json\u0026#34; /user/gpg_keys -f armored_public_key=\u0026#34;$(cat /tmp/publickey.asc)\u0026#34; rm /tmp/publickey.asc Using SSH to sign commits Configure Git to sign with your SSH key 1 2 3 4 git config commit.gpgsign true git config gpg.format ssh git config gpg.ssh.program ssh-keygen git config user.signingkey /home/iocanel/.ssh/id_rsa Add SSH Signing Key to GitHub Again, there are two ways of dealing with it (as with GPG):\nManually Using the Github API Add it manually to the Github settings page\nGo to https://github.com/settings/keys and manually add it.\nUse gh and the Github API\nEnable API access to SSH signing keys\n1 gh auth refresh -h github.com -s admin:ssh_signing_key Add the Key using gh and the API\n1 gh api -X POST -H \u0026#34;Accept: application/vnd.github+json\u0026#34; /user/ssh_signing_keys -f 
key=\u0026#34;$(cat ~/.ssh/id_rsa.pub)\u0026#34; -f title=\u0026#34;My SSH signing key\u0026#34; GPG or SSH? So, which one should you use?\nI like the idea of using GPG keys for signing commits, due to the fact that they are public and can be used to verify the commit. In some scenarios and integrations, SSH might be more convenient.\n","date":1743552e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"410ce844357b4e8171d6f4d5ff4655cc","permalink":"https://iocanel.com/2025/04/secure-programming-practices-signed-commits/","publishdate":"2025-04-02T00:00:00Z","relpermalink":"/2025/04/secure-programming-practices-signed-commits/","section":"post","summary":"Signing Git Commits I recently had a discussion with two fellow engineers about secure coding practices. After the discussion I realized that I am neglecting one of the most important practices: signing my commits.\nThere are tons of articles on the internet explaining why and how. These are my notes on the subject that I decided to publish.\nThese notes actually use literate programming, so they are a mix of notes and code you can actually use via org-mode. You can find the actual notes file here.\n","tags":["security","git","gpg","ssh"],"title":"Secure programming practices - Signed commits","type":"post"},{"authors":null,"categories":null,"content":"Intro It seems that everyone is an MCP guru these days.\nI am not.\nIn fact, I know almost nothing about it. I am just aware of the concept.\nThis post describes the steps I took in order to create an MCP server from scratch, resulting in the project created at:\nhttps://github.com/iocanel/backstage-mcp I also recorded my journey, starting from scratch with almost zero knowledge on the topic and using the Quarkus Blog as a guide:\nFull version This is me having no idea what I am doing and spending 1.5 hours trying to figure it out.\nShorter version A reshoot of the first video, so that I can come up with something shorter.\nWhat is MCP? MCP stands for Model Context Protocol. It’s the protocol that tools like Goose, an interactive AI shell, use to talk to plugins.\nThe idea is simple:\nThe AI shell sends JSON messages over stdin. The plugin processes them and sends responses over stdout. If you’ve ever written a language server or CLI plugin using stdio, this will feel familiar.
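To make the stdio exchange concrete, here is a hypothetical request/response pair, loosely modeled on the JSON-RPC 2.0 shape that MCP messages use. The tool name matches the listTemplates tool defined later in this post; the exact fields may vary between protocol revisions, so treat this as an illustration rather than a spec reference.

// Goose -> plugin (stdin): invoke a tool
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "listTemplates", "arguments": {}}}

// plugin -> Goose (stdout): the tool's result
{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "quarkus-service-template"}]}}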
In this case, the plugin acts as a bridge between Goose and Backstage, using Quarkus Backstage.\nThe goal I wanted to:\nList available Backstage templates from Goose. Instantiate a template using parameters from a YAML file. And I wanted to do this using:\nQuarkus as the backend, Backstage as the API target, MCP as the communication protocol, Quarkus Backstage and Quarkus MCP Server extensions to simplify things. Anatomy of the project Dependencies The project uses two main Quarkus extensions:\n1 2 3 4 5 6 7 8 9 10 11 \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.quarkiverse.mcp\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;quarkus-mcp-server-stdio\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.0.Alpha5\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;io.quarkiverse.backstage\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;quarkus-backstage\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.4.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; The first implements an MCP server using stdin/stdout. The second talks to the Backstage API.\nImplementation The actual implementation lives in a single Java class. It defines the logic for handling incoming MCP requests. Right now, it supports:\nListing templates Instantiating a template using values from a YAML file The actual code:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 package org.acme; import java.util.List; import io.quarkiverse.backstage.client.BackstageClient; import io.quarkiverse.backstage.common.utils.Serialization; import io.quarkiverse.mcp.server.Tool; import io.quarkiverse.mcp.server.ToolArg; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import java.util.Map; import java.nio.file.Path; import com.fasterxml.jackson.core.type.TypeReference; @ApplicationScoped public class Backstage { @Inject BackstageClient client; @Tool(description = \u0026#34;List backstage templates\u0026#34;) public List\u0026lt;String\u0026gt; listTemplates() { return client.entities().list(\u0026#34;kind=template\u0026#34;).stream().map(e -\u0026gt; e.getMetadata().getName()).toList(); } @Tool(description = \u0026#34;Create a backstage project using a template\u0026#34;) public String createProject(@ToolArg(description = \u0026#34;Template name\u0026#34;) String templateName, @ToolArg(description = \u0026#34;Path to parameters file\u0026#34;) String valuesFile) { Map\u0026lt;String, Object\u0026gt; values = Serialization.unmarshal(Path.of(valuesFile).toFile(), new TypeReference\u0026lt;Map\u0026lt;String, Object\u0026gt;\u0026gt;() {}); return client.templates().withName(templateName).instantiate(values); } } That’s it. Minimal and focused.\nBackstage setup To allow the MCP plugin to talk to your Backstage instance, make sure app-config.yaml has a service-to-service token configured like this:\n1 2 3 4 5 6 7 backend: auth: externalAccess: - type: static options: token: \u0026lt;put your token here\u0026gt; subject: curl-requests That token will be used by the plugin to authenticate against the Backstage API.\nGoose integration Goose can be configured to use this MCP plugin via config.yaml:\n1 2 3 4 5 6 7 8 9 10 11 quarkus-backstage-mcp: name: quarkus-backstage-mcp enabled: true type: stdio cmd: java args: - -jar - /path/to/demo/backstage-mcp/target/quarkus-app/quarkus-run.jar envs: QUARKUS_BACKSTAGE_URL: \u0026lt;url to backstage instance\u0026gt; QUARKUS_BACKSTAGE_TOKEN: \u0026lt;backstage service to service token\u0026gt; Alternatively, you could launch the jar directly using Java or through your favorite launch tool.\nExample prompts Once everything is wired up, you can interact with Backstage through Goose:\nList available templates 1 list all the available backstage templates Instantiate a template First, extract the default values:\n1 quarkus backstage template info --show-default-values \u0026lt;template-name\u0026gt; \u0026gt; values.yaml Then prompt Goose:\n1 create a new project from template \u0026lt;template-name\u0026gt; using values from values.yaml The plugin takes care of everything: parsing, calling the API, and responding over stdout.\nReflections It was much easier than I initially thought. 
Today, I managed to record two videos on the subject, create a github project and write a blog post about it.\nI find the result pretty impressive and I love the fact that I can allow tools like goose to instantly gain access to the tools I’ve been …","date":1742601600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"4d574c6c79669eba356374f937b25134","permalink":"https://iocanel.com/2025/03/creating-an-mcp-server-with-quarkus-and-backstage/","publishdate":"2025-03-22T00:00:00Z","relpermalink":"/2025/03/creating-an-mcp-server-with-quarkus-and-backstage/","section":"post","summary":"Intro It seems that everyone is an MCP guru these days.\nI am not.\nIn fact, I know almost nothing about it. I am just aware of the concept.\nThis post describes the steps I took in order to create an MCP server from scratch, resulting in the project created at:\nhttps://github.com/iocanel/backstage-mcp I also recorded my journey, starting from scratch with almost zero knowledge on the topic and using the Quarkus Blog as a guide:\n","tags":null,"title":"Creating an MCP Server with Quarkus and Backstage","type":"post"},{"authors":null,"categories":["emacs"],"content":"Using ChatGPT via gptel to make my Emacs nutrition tracker smarter Introduction Back in April 2020 I shared how I built a nutrition tracker in Emacs that leveraged org-capture templates and org-ql to record foods, recipes, and meals. At that time, I relied on an org-mode based database and manual updates to keep track of calories, protein, carbs, and fat. While the system worked, maintaining that data was both tedious and error-prone. Each time I needed to insert a new food, I had to do an internet search to find the nutritional information and then manually update my org-mode files.\nRecently, I discovered gptel, which allows Emacs users to easily integrate with ChatGPT or other LLMs. So, I couldn’t resist the opportunity to use it to smarten up my nutrition tracker by integrating it with LLMs so that it can fetch nutritional information for me. The goal is to retain the previously used templates, but add a post-processing mechanism that will kick in when a new food entry is captured but is missing the nutritional information.\nA video walkthrough of this post can be found here:\nCreating a function to get nutritional information from ChatGPT The first thing that we are going to need is a new function that, given a food and its quantity, will query ChatGPT via GPTel for all nutrients in a FOOD item with a given QUANTITY. The function will return a map of nutrients to their values.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 (defun ic/nutrients-get (food quantity) \u0026#34;Query ChatGPT via GPTel for all nutrients in a FOOD item with a given QUANTITY. Returns a map of nutrients to their values.\u0026#34; (if (or (not food) (string-empty-p food)) (make-hash-table) ;; Return an empty map if food is nil or empty (let* ((quantity (or quantity \u0026#34;1 serving\u0026#34;)) (prompt (format \u0026#34;Provide the nutritional values (calories, protein, carbs, fat) for %s in %s. 
Only return a JSON object with the keys \u0026#39;calories\u0026#39;, \u0026#39;protein\u0026#39;, \u0026#39;carbs\u0026#39;, and \u0026#39;fat\u0026#39;, and their numeric values.\u0026#34; food quantity)) (response (if (fboundp \u0026#39;gptel-request) (let ((response \u0026#34;\u0026#34;)) (gptel-request prompt :callback (lambda (resp \u0026amp;rest _) (setq response (replace-regexp-in-string \u0026#34;^```json\\\\|```$\u0026#34; \u0026#34;\u0026#34; resp)) (message \u0026#34;Response: %s\u0026#34; response))) (while (string-empty-p response) (sleep-for 0.1)) response) \u0026#34;{}\u0026#34;))) (condition-case nil (json-read-from-string response) (error (progn (message \u0026#34;Error parsing JSON response\u0026#34;) nil)))))) Next stop is to create a function that goes to the current org-mode heading, calls the function above to get the nutrients, and then updates the properties of the heading with the nutritional information.\nCreating a function that post processes captured food entries 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 (defun ic/post-process-nutrition-food-entry () \u0026#34;Calculate nutrition values for the last captured Org entry and update the table. Only query for nutrients if user input is blank.\u0026#34; (save-excursion ;; Safely check for heading. If there\u0026#39;s no heading, do nothing. (condition-case nil (progn (org-back-to-heading t) ; throws an error if no heading above point (let* ((food (org-get-heading t t t t)) ;; Dynamically get the heading as the food name (unit (or (org-entry-get nil \u0026#34;UNIT\u0026#34;) \u0026#34;unit\u0026#34;)) ;; Default to \u0026#34;unit\u0026#34; (quantity (or (org-entry-get nil \u0026#34;QUANTITY\u0026#34;) \u0026#34;1\u0026#34;)) ;; Default to \u0026#34;1\u0026#34; (nutrients (ic/nutrients-get food (format \u0026#34;%s %s\u0026#34; quantity unit))) (calories (or (ic/string-trim (org-entry-get nil \u0026#34;CALORIES\u0026#34;)) (format \u0026#34;%s\u0026#34; (alist-get \u0026#39;calories nutrients)))) (protein (or (ic/string-trim (org-entry-get nil \u0026#34;PROTEIN\u0026#34;)) (format \u0026#34;%s\u0026#34; (alist-get \u0026#39;protein nutrients)))) (carbs (or (ic/string-trim (org-entry-get nil \u0026#34;CARBS\u0026#34;)) (format \u0026#34;%s\u0026#34; (alist-get \u0026#39;carbs nutrients)))) (fat (or (ic/string-trim (org-entry-get nil \u0026#34;FAT\u0026#34;)) (format \u0026#34;%s\u0026#34; (alist-get \u0026#39;fat nutrients))))) ;; Log debug information for troubleshooting (message \u0026#34;%s\u0026#34; (prin1-to-string nutrients)) (message \u0026#34;Setting properties: calories: %s, protein: %s, carbs: %s, fat: %s\u0026#34; calories protein carbs fat) ;; Update properties (when calories (org-set-property \u0026#34;CALORIES\u0026#34; calories)) (when protein (org-set-property \u0026#34;PROTEIN\u0026#34; protein)) (when carbs (org-set-property \u0026#34;CARBS\u0026#34; carbs)) (when fat (org-set-property \u0026#34;FAT\u0026#34; fat)) ;; Update the table below the entry (let ((found-table (re-search-forward \u0026#34;TBLNAME\u0026#34; nil t))) (if found-table (progn (message \u0026#34;Table found, updating values...\u0026#34;) (org-table-goto-line 2) (org-table-put 2 4 (or quantity \u0026#34;1\u0026#34;)) ;; Update quantity (org-table-put 2 5 (or calories \u0026#34;0\u0026#34;)) ;; Update calories (org-table-put 2 6 (or protein \u0026#34;0\u0026#34;)) ;; Update protein (org-table-put 2 7 (or carbs 
\u0026#34;0\u0026#34;)) ;; Update carbs (org-table-put 2 8 (or fat \u0026#34;0\u0026#34;)) ;; Update fat (org-table-recalculate \u0026#39;all) (org-table-align)) (message \u0026#34;No table found below entry.\u0026#34;))))) ;; If `org-back-to-heading` fails, we skip the whole update. (error (message \u0026#34;No heading found; skipping nutrition update.\u0026#34;))))) Registering the post processing …","date":1739686740,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"8c024bbeb751fdb2ff1e502535d1321a","permalink":"https://iocanel.com/2025/02/using-chatgpt-via-gptel-to-make-my-emacs-nutrition-tracker-smarter/","publishdate":"2025-02-16T09:19:00+03:00","relpermalink":"/2025/02/using-chatgpt-via-gptel-to-make-my-emacs-nutrition-tracker-smarter/","section":"post","summary":"Using ChatGPT via gptel to make my Emacs nutrition tracker smarter Introduction Back in April 2020 I shared how I built a nutrition tracker in Emacs that leveraged org-capture templates and org-ql to record foods, recipes, and meals. At that time, I relied on an org-mode based database and manual updates to keep track of calories, protein, carbs, and fat. While the system worked, maintaining that data was both tedious and error-prone. Each time I needed to insert a new food, I had to do an internet search to find the nutritional information and then manually update my org-mode files.\n","tags":["emacs","chatgpt","gptel","ai","nutrition tracker"],"title":"Using ChatGPT via gptel to make my Emacs nutrition tracker smarter","type":"post"},{"authors":null,"categories":["development"],"content":"Intro A couple of weeks ago I came across Roberto Carratalla’s blog post on Function calling on OpenShift AI. At the time I was preparing for RedHat Summit Connect Zurich 2025 where I was meant to run a workshop on Quarkus and Langchain4j with Kyra Goud, Dimitris Andreadis and Codrin Bucur. We had an issue, however, related to enabling functions on OpenShift AI.\nRoberto pointed us to the blog post, but I couldn’t spot what I was doing wrong. 
So, I decided to port the examples in the blog post to Java to make sure that I was comparing apples to apples.\nThis post is a step-by-step guide on how to port the DuckDuckGo example from Python to Java with Quarkus and Langchain4j.\nA video of me porting the example to Quarkus and Langchain4j can be found at:\nPorting the DuckDuckGo example from Python to Java with Quarkus and Langchain4j.\nThe original example The original example is written in Python and it’s pretty straightforward.\nCreate the chat model It creates and configures an instance of a chat-based language model.\n1 2 3 4 5 6 7 8 9 10 11 12 # LLM definition llm = ChatOpenAI( openai_api_key=API_KEY, openai_api_base= f\u0026#34;{INFERENCE_SERVER_URL}/v1\u0026#34;, model_name=MODEL_NAME, top_p=0.92, temperature=0.01, max_tokens=512, presence_penalty=1.03, streaming=True, callbacks=[StreamingStdOutCallbackHandler()] ) Connect tools It then creates a tool that delegates to the duckduckgo search library.\n1 2 3 from langchain_community.tools import DuckDuckGoSearchRun llm_with_tools = llm.bind_tools([DuckDuckGoSearchRun], tool_choice=\u0026#34;auto\u0026#34;) Call the LLM Next stop is to actually prompt the user for a query and send it over to the LLM, letting it know what the available tools are.\n1 2 3 4 5 query = \u0026#34;Search what is the latest version of OpenShift?\u0026#34; messages = [HumanMessage(query)] ai_msg = llm_with_tools.invoke(messages) print(ai_msg.tool_calls) messages.append(ai_msg) Perform the actual calls Once we get the response from the LLM, we can perform the actual calls to the tools.\n1 2 3 4 for tool_call in ai_msg.tool_calls: selected_tool = {\u0026#34;duckduckgo_search\u0026#34;: duckduckgo_search}[tool_call[\u0026#34;name\u0026#34;].lower()] tool_msg = selected_tool.invoke(tool_call) messages.append(tool_msg) Pass the tool response back to the LLM Finally, we pass the tool response back to the LLM.\n1 llm_with_tools.invoke(messages) Porting the example to Quarkus and Langchain4j Create a client for DuckDuckGo The python example used the duckduckgo search tool from the langchain_community library. Unsure whether there is a Java equivalent, I decided to create a client for DuckDuckGo using Rest Client Jackson. I’ll just need an `interface` that defines a search method that corresponds to an HTTP GET request to `/q={query}\u0026amp;format=json`.\nThe `@RegisterRestClient` annotation is used to register the client with the Quarkus runtime. The `configKey` attribute is used to specify the configuration key that will be used to configure the client. I can then set the URL of the DuckDuckGo API in the `application.properties` file, or alternatively pass it directly through the `@RegisterRestClient` annotation (via the `baseUri` attribute).\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 package org.acme; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import jakarta.enterprise.context.ApplicationScoped; import jakarta.ws.rs.Consumes; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.core.MediaType; @ApplicationScoped @RegisterRestClient(configKey = \u0026#34;duckduckgo\u0026#34;) public interface SearchClient { @GET @Path(\u0026#34;/q={query}\u0026amp;format=json\u0026#34;) @Consumes(MediaType.APPLICATION_JSON) String search(String query); }
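Since the interface above uses configKey = "duckduckgo", the base URL can be supplied via application.properties. A minimal sketch; the endpoint URL here is my assumption of the public DuckDuckGo instant answer API, not something taken from the original post:

# base URL for the rest client registered under the "duckduckgo" config key (URL is an assumption)
quarkus.rest-client.duckduckgo.url=https://api.duckduckgo.com

With that single property in place, Quarkus will inject a ready-to-use implementation of SearchClient wherever it is required.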
(Optional) Create a service that uses the client I will wrap this in a service that will be used by the Langchain4j tool. I am doing this just to have a place to add logging or other control logic if needed. I could have used the client directly in the tool.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 package org.acme; import org.eclipse.microprofile.rest.client.inject.RestClient; import dev.langchain4j.agent.tool.Tool; import io.quarkus.logging.Log; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; @ApplicationScoped public class SearchService { @Inject @RestClient SearchClient searchClient; @Tool(\u0026#34;Perform internet search using DuckDuckGo\u0026#34;) public String search(String query) { Log.info(\u0026#34;Search query: \u0026#34; + query); return searchClient.search(query); } } Define the AI Service 1 2 3 4 5 6 7 8 9 10 11 package org.acme; import io.quarkiverse.langchain4j.RegisterAiService; import jakarta.enterprise.context.ApplicationScoped; @RegisterAiService @ApplicationScoped public interface AiService { String search(String query); } Call the Service from the CLI Last step is to create a CLI command that reads the query from the user and calls the service.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 package org.acme; import io.quarkus.picocli.runtime.annotations.TopCommand; import jakarta.inject.Inject; import …","date":1739484e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"f6d80b587a842b2676ac424212a550b4","permalink":"https://iocanel.com/2025/02/porting-the-duckduckgo-example-from-python-to-java-with-quarkus-and-langchain4j/","publishdate":"2025-02-14T00:00:00+02:00","relpermalink":"/2025/02/porting-the-duckduckgo-example-from-python-to-java-with-quarkus-and-langchain4j/","section":"post","summary":"Intro A couple of weeks ago I came across Roberto Carratalla’s blog post on Function calling on OpenShift AI. At the time I was preparing for RedHat Summit Connect Zurich 2025 where I was meant to run a workshop on Quarkus and Langchain4j with Kyra Goud, Dimitris Andreadis and Codrin Bucur. We had an issue, however, related to enabling functions on OpenShift AI.\nRoberto pointed us to the blog post, but I couldn’t spot what I was doing wrong. So, I decided to port the examples in the blog post to Java to make sure that I was comparing apples to apples.\n","tags":["java","quarkus","langchain4j","ai"],"title":"Porting the DuckDuckGo example from Python to Java with Quarkus and Langchain4j","type":"post"},{"authors":null,"categories":["development"],"content":"Introduction This year I decided to put some personal time in learning reactjs. While I enjoy using Javascript for the frontend, I’d say that it’s not the language of choice for me for backend use. I don’t have anything against nodejs, but I prefer to use java frameworks, which is the focus of my day job. So, I wanted to combine reactjs with Quarkus. A combination that just became more fun with Quinoa. Quinoa allows users to use their favorite javascript framework with Quarkus with no additional configuration. On top of that, it allows development of both backend and frontend via the Quarkus dev mode. Last but not least, it allows for native compilation that produces a single binary containing both frontend and backend.\nThis post demonstrates development using Quinoa. To make things more interesting, it adds security into the mix. 
In particular it uses Keycloak as an identity provider and shows how frontend and backend can exchange information in a secure way.\nThe end result is something like:\nThe full project can be found at: https://github.com/iocanel/quarkus-react-keycloak\nCredits: The following demo https://github.com/dasniko/keycloak-reactjs-demo by Niko Kobler was really helpful and influential for this post.\nChallenges The main challenges that such a setup faces, and which are addressed in this blog, are:\nStarting Keycloak dev service configured with a public client (for frontend use). Keycloak discovery from the frontend. Frontend/backend communication using the token obtained from Keycloak Getting started with Quinoa To create an empty Quarkus project using the Quinoa extension:\n1 2 3 4 mkdir -p ~/demo cd ~/demo quarkus create app quarkus-react-keycloak -x=io.quarkiverse.quinoa:quarkus-quinoa cd quarkus-react-keycloak The generated project has the structure shown below. It’s a regular Quarkus application with the addition of `webui` under `src/main`:\nThe `webui` folder is a traditional Javascript project.\nAdd the extensions for keycloak To be able to access the Keycloak Dev Services, we’ll need the following extensions.\n1 2 quarkus ext add -B oidc quarkus ext add -B keycloak-authorization Now, by running the application in dev mode, the Keycloak Dev Service is automatically started:\n1 ./mvnw quarkus:dev Exposing Keycloak to the frontend The Dev Service for Keycloak passes as properties all configuration information needed for interaction with the service. The frontend, however, is completely unaware. Let’s expose the information via REST. So, let’s add the `quarkus-resteasy-reactive-jackson` extension:\n1 quarkus ext add -B resteasy-reactive-jackson Now, let’s add a resource that exposes Keycloak information. The info we need to expose is:\nThe url The realm The clientId 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 package org.acme; public class KeycloakInfo { private final String url; private final String realm; private final String clientId; public KeycloakInfo(String url, String realm, String clientId) { this.url = url; this.realm = realm; this.clientId = clientId; } public String getUrl() { return url; } public String getRealm() { return realm; } public String getClientId() { return clientId; } } In theory we can live with just the `url` (it’s the only thing that is dynamic) and hardcode the rest in the frontend. Let’s create a rest resource that uses the path `/api/keycloak/info.json` to expose the `Keycloak` info. 
That resource obtains the `url` and `clientId` from the configuration populated by the Dev Service using the `@ConfigProperty` annotation.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 package org.acme; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\u0026#34;/api/keycloak\u0026#34;) public class KeycloakResource { @ConfigProperty(name=\u0026#34;keycloak.url\u0026#34;) String keycloakUrl; String realm=\u0026#34;quarkus\u0026#34;; @ConfigProperty(name=\u0026#34;quarkus.oidc.client-id\u0026#34;) String clientId; @GET @Path(\u0026#34;info.json\u0026#34;) @Produces(MediaType.APPLICATION_JSON) public KeycloakInfo getInfo() { return new KeycloakInfo(keycloakUrl, realm, clientId); } } By accessing http://localhost:8080/api/keycloak/info.json you get something similar to:\n1 2 3 4 5 { url: \u0026#34;http://localhost:43749\u0026#34;, realm: \u0026#34;quarkus\u0026#34;, clientId: \u0026#34;quarkus-app\u0026#34; } Before demonstrating what exactly we are going to do with this blob of json in the frontend, let’s configure which paths are served by the frontend and which by the backend.\nSingle application routing To configure single page application routing, let’s set `quarkus.quinoa.enable-spa-routing` to true.\n1 echo \u0026#34;quarkus.quinoa.enable-spa-routing=true\u0026#34; \u0026gt;\u0026gt; src/main/resources/application.properties Setup the react application Let’s just delete the `webui` folder for now and create a new reactjs application from scratch:\n1 2 3 4 rm -r src/main/webui cd src/main yarn create react-app webui cd webui An alternative is to use a react template. I usually go with …","date":1666006560,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"88c0c0ed26d4c903865e3be79c13f14f","permalink":"https://iocanel.com/2022/10/quarkus-react-with-quinoa-and-keycloak/","publishdate":"2022-10-17T14:36:00+03:00","relpermalink":"/2022/10/quarkus-react-with-quinoa-and-keycloak/","section":"post","summary":"Introduction This year I decided to put some personal time in learning reactjs. While I enjoy using Javascript for the frontend, I’d say that it’s not the language of choice for me for backend use. I don’t have anything against nodejs, but I prefer to use java frameworks, which is the focus of my day job. So, I wanted to combine reactjs with Quarkus. A combination that just became more fun with Quinoa. Quinoa allows users to use their favorite javascript framework with Quarkus with no additional configuration. On top of that, it allows development of both backend and frontend via the Quarkus dev mode. Last but not least, it allows for native compilation that produces a single binary containing both frontend and backend.\n","tags":["quarkus","quinoa","keycloak","react"],"title":"Quarkus React with Quinoa and Keycloak","type":"post"},{"authors":null,"categories":["hobbies"],"content":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nIn this post I am going to discuss flowcharts and more specifically:\nwhy bother with flowcharts tools for creating flowcharts integrating flowcharts with a wiki What is a flowchart? 
A flowchart is a diagram of the sequences of movements or actions of people or things involved in a complex system or activity.\nIn the sport of Jiu Jitsu the `activity` may be a technique or a series of techniques one needs to perform to either submit the opponent or to get into an advantageous position. In other words, a flowchart is a graphical representation of the steps that constitute one or more techniques.\nWhy use flowcharts? Visualization helps the process of learning and also helps the brain retain information. No wonder people who exhibit impressive memory skills often use visualization-based techniques like the `Memory Palace` etc.\nFor material that is already known, using flowcharts really helps refreshing one’s memory, as it’s much faster than going through the original material. It’s also something that one can easily print, add notes on top of it and so on.\nFor new material, creating flowcharts assists comprehension and reinforces learning.\nLast but not least, flowcharts can act as an index that can help you to easily navigate to a `step` of interest.\nFlowcharting tools There are tons of flowcharting tools out there. I am interested only in tools that define a domain-specific language, mostly because we can use scripts to generate them (or parts of them). WYSIWYG (what you see is what you get) tools might be more appealing to some users, but apparently these people are not my target audience.\nOther qualities of a flowcharting tool include:\nease of use verbosity quality of feedback (error messages) integrations editor support web / wiki support This post is going to focus on three of the most popular choices out there:\nPlantUML yuml flowchart.js PlantUML PlantUML is a component that allows users to easily create UML diagrams. UML is a modeling language used in software engineering and one of the diagrams it uses is the activity diagram, which is pretty much a flowchart.\nPlantUML uses a client/server architecture, so it usually requires internet access. Usually? Well, it allows you to run the server locally too (without much hassle).\nThe tool has integration with tons of tools and services and is generally a solid choice.\nHere’s an example diagram for closed guard:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 @startuml start repeat :closed guard; switch ( opponents posture ) case ( straight ) if (hip bump) then (yes) :Mount; end else if (kimura) then (yes) :Submission; end else if (guillotine) then (yes) :Submission; end endif case ( balanced ) fork :Kuzushi with knees; :Two on one arm drag; end merge case ( forward ) if (Underhook) then (yes) :Grab armpit / lapel; :Bring opposite knee to the floor; :Free hip; :Take the back; end else if (Overhook) then (yes) endif endswitch repeat while (check posture) end @enduml I have been using PlantUML a lot for creating BJJ-related flowcharts and my only complaint is its verbosity. Especially for non-developers it might seem a bit too much.\nyuml yuml is pretty similar to PlantUML with less verbose syntax. In fact, it completely lacks keywords and only uses symbols. So, in a sense it feels like creating the diagram in ascii. It’s also supported out of the box in mdwiki using mdwiki gimmicks.\nIt also requires internet access, as the rendering happens on their online server. One downside compared to the competition is that I didn’t find a way to include clickable parts inside the generated graph. 
This seems to be an option in the other two tools.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 (start)-(closed guard)-\u0026gt;(check posture)-\u0026gt;\u0026lt;p\u0026gt; \u0026lt;p\u0026gt;[straight]-\u0026gt;(hip bump)-\u0026gt;\u0026lt;h\u0026gt; \u0026lt;h\u0026gt;[yes]-\u0026gt;(mount) \u0026lt;h\u0026gt;[no]-\u0026gt;(kimura)-\u0026gt;\u0026lt;k\u0026gt; \u0026lt;k\u0026gt;[yes]-\u0026gt;(submission) \u0026lt;k\u0026gt;[no]-\u0026gt;(guillotine)-\u0026gt;\u0026lt;g\u0026gt; \u0026lt;g\u0026gt;[yes]-\u0026gt;(submission) \u0026lt;g\u0026gt;[no]-\u0026gt;(check posture) (mount)-\u0026gt;(end) (submission)-\u0026gt;(end) \u0026lt;p\u0026gt;[balanced]-\u0026gt;(kuzushi with knees)-\u0026gt;(two on one arm drag)-\u0026gt;(check posture) \u0026lt;p\u0026gt;[forward]-\u0026gt;(underhook)-\u0026gt;\u0026lt;u\u0026gt; \u0026lt;u\u0026gt;[yes]-\u0026gt;(grab armpit / lapel)-\u0026gt;(bring knee to the floor)-\u0026gt;(free hip)-\u0026gt;(take the back) (take the back)-\u0026gt;(end) \u0026lt;u\u0026gt;[no]-\u0026gt;(overhook)-\u0026gt;(check posture) Much simpler to write, but the diagram itself does not look as tidy as the previous one.\nFlowchart JS The last contender is flowchart.js. This project focuses exclusively on flowcharts instead of UML (as was the case for the previous tools).\nSyntax-wise it is similar to yuml; however, it does require you to additionally define the type and content of each node in the graph.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 st=\u0026gt;start: Start …","date":1641938160,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"e3d9a0e117049a0e34018e43149fb1dd","permalink":"https://iocanel.com/2022/01/hackers-guide-to-jiu-jitsu-flowcharts/","publishdate":"2022-01-11T23:56:00+02:00","relpermalink":"/2022/01/hackers-guide-to-jiu-jitsu-flowcharts/","section":"post","summary":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nIn this post I am going to discuss flowcharts and more specifically:\nwhy bother with flowcharts tools for creating flowcharts integrating flowcharts with a wiki What is a flowchart? A flowchart is a diagram of the sequences of movements or actions of people or things involved in a complex system or activity.\n","tags":null,"title":"Hackers guide to Jiu Jitsu: Flowcharts","type":"post"},{"authors":null,"categories":["development"],"content":"Introduction Kubernetes is around for almost 7 years now! Ever since the beginning there have been efforts to make consuming / binding to services simpler. And while discovering the actual service is not so much of an issue (if you employ a set of conventions), getting the credentials etc is slightly trickier.\nThe Service Catalog has been an effort that promised to simplify provisioning and binding to services, but it seems that it has lost its momentum. The lack of uniformity between providers, the differences in how each service communicated the binding information and the fact that people tend to favor operators for provisioning services made it pretty hard to use in practice.\nThe Service Binding Operator is a more recent and modern initiative. It stays out of the way of service provisioning (leaving that to operators) and focuses on how to best communicate the binding information to the application. 
An interesting part of the specification is the workload projection, which defines a directory structure that will be mounted to the application container when the binding happens in order to pass all the required binding information:\ntype uri credentials
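To illustrate the projection, here is a minimal sketch of how an application could read such a mounted binding from the file system. It assumes the conventional SERVICE_BINDING_ROOT layout used by the spec; the binding directory name and the username/password file names are illustrative, not taken from this post.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch: read a projected binding from the directory
// mounted by the Service Binding Operator. The binding name
// "pg-cluster" and the credential file names are assumptions.
public class BindingReader {

    public static void main(String[] args) throws IOException {
        Path root = Path.of(System.getenv().getOrDefault("SERVICE_BINDING_ROOT", "/bindings"));
        Path binding = root.resolve("pg-cluster"); // one directory per bound service

        // Each piece of binding information is a plain file whose name is the key.
        String type = Files.readString(binding.resolve("type")).trim();         // e.g. postgresql
        String username = Files.readString(binding.resolve("username")).trim();
        String password = Files.readString(binding.resolve("password")).trim();

        System.out.printf("Connecting to a %s service as %s%n", type, username);
    }
}

In practice, frameworks like Quarkus read these files for you, which is exactly the support discussed next.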
Other parts of the specification are related to the `ServiceBinding` resource (which controls what services are bound to which application and how).\nQuarkus already supports the workload projection part of the spec and recently received enhancements on the binding part, which is going to be the focus of this post. In particular, this post is going to discuss how the `ServiceBinding` can be automatically generated for the user and will walk you through the whole process, from installing the needed operators to configuring and deploying the application.\nFor the sake of this post we are going to use kind, install the Service Binding Operator and the Crunchy data operator for Postgres. Then, we are going to create a postgres cluster and finally we will create a simple todo application, deploy it and bind it to the provisioned postgres.\nStart a new kind cluster If you’ve already created one, or don’t use kind at all, feel free to skip.\n1 kind create cluster Install the OLM Both operators that will be installed in this post will be installed through the Operatorhub. So, the first step is to install the Operator Lifecycle Manager.\n1 curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.19.1/install.sh | bash -s v0.19.1 Install the Service Binding Operator 1 kubectl create -f https://operatorhub.io/install/service-binding-operator.yaml To verify the installation execute the following command.\n1 kubectl get csv -n operators -w When the `phase` of the Service Binding Operator is `Succeeded` you may proceed to the next step.\nInstall the Postgres Crunchy Operator 1 kubectl create -f https://operatorhub.io/install/postgresql.yaml As above, to verify the installation execute:\n1 kubectl get csv -n operators -w When the `phase` of the operator is `Succeeded` you may proceed to the next step.\nCreate a Postgres cluster We shall create a new namespace, where we will install our cluster and application:\n1 2 kubectl create ns demo kubectl config set-context --current --namespace=demo To create the cluster we need to apply the following custom resource:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: pg-cluster namespace: demo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0 postgresVersion: 13 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \u0026#34;ReadWriteOnce\u0026#34; resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.33-2 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \u0026#34;ReadWriteOnce\u0026#34; resources: requests: storage: 1Gi - name: repo2 volume: volumeClaimSpec: accessModes: - \u0026#34;ReadWriteOnce\u0026#34; resources: requests: storage: 1Gi proxy: pgBouncer: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:centos8-1.15-2 This resource has been borrowed from Service Binding Operator Quickstart, which is definitely something worth looking into (if you haven’t already).\nLet’s save that file under `pg-cluster.yml` and apply it using `kubectl`\n1 kubectl apply -f ~/pg-cluster.yml Let’s check the pods to verify the installation:\n1 kubectl get pods -n demo Create a Quarkus application that will bind to Postgres The application we are going to create is going to be a simple `todo` application that will connect to postgres via hibernate and panache.\nThe application that we will create is heavily inspired by Clement Escoffier’s Quarkus TODO app, but will focus less on the presentation and more on the binding aspect.\nWe will generate the application using the following maven command.\n1 2 3 4 mkdir -p ~/demo cd ~/demo mvn …","date":1638201360,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"e646f93a1b4af9b4886ef2c6b4d4cc53","permalink":"https://iocanel.com/2021/11/using-quarkus-with-the-service-binding-operator/","publishdate":"2021-11-29T17:56:00+02:00","relpermalink":"/2021/11/using-quarkus-with-the-service-binding-operator/","section":"post","summary":"Introduction Kubernetes is around for almost 7 years now! Ever since the beginning there have been efforts to make consuming / binding to services simpler. And while discovering the actual service is not so much of an issue (if you employ a set of conventions), getting the credentials etc is slightly trickier.\nThe Service Catalog has been an effort that promised to simplify provisioning and binding to services, but it seems that it has lost its momentum. The lack of uniformity between providers, the differences in how each service communicated the binding information and the fact that people tend to favor operators for provisioning services made it pretty hard to use in practice.\n","tags":["java","quarkus","kubernetes"],"title":"Using Quarkus with the Service Binding Operator","type":"post"},{"authors":null,"categories":["devops"],"content":"Introduction I was experimenting with some Github Actions that needed to make use of Mandrel, so I thought that I should use sdkman. I ran into some issues though, and I thought I should document the experience.\nThe main issue I encountered is that no matter how I mixed sdkman into the mix, my steps acted like it was not there.\nThe sdkman action It seems that there is a Github Action for sdkman available, which should allow you to manage any `candidate`. I used it like this:\n1 2 3 4 5 - uses: sdkman/sdkman-action@master id: sdkman with: candidate: java version: 21.2.0.0-mandrel But when I proceeded later on to make use of the `native-image` binary it was not there.\nUsing sdkman manually I decided that instead of troubleshooting the Github Action for sdkman, it might be simpler and quicker to manage https://sdkman.io/ myself.\nIt was not!\nThis is what I tried:\n1 2 3 4 5 6 - name: Setup sdkman run: | curl -s \u0026#34;https://get.sdkman.io\u0026#34; | bash source \u0026#34;$HOME/.sdkman/bin/sdkman-init.sh\u0026#34; sdkman_auto_answer=false sdkman_selfupdate_enable=false The effect was similar. The step seemed to work with no issue whatsoever, but when I tried to use `native-image` later on, it was not there.\nTroubleshooting When I started adding debugging / troubleshooting commands in my script, like:\n1 2 which java java --version I realized that it was not using Mandrel at all, but instead a `jdk 11` binary that was found in the `PATH`. The path? Did I say the path?\nBingo! For sdkman to properly work I should find an entry like `$HOME/.sdkman/candidates/java/current/bin` in my `PATH`.\nI didn’t!\nEven worse, the `sdk` binary was also not found in the path!?\nThere is no such thing as an sdk binary! 
In case you don’t already know, sdkman is not a binary but an alias that gets initialized by your shell. Usually, it should be initialized by a command like:\n1 [[ -s \u0026#34;$HOME/.sdkman/bin/sdkman-init.sh\u0026#34; ]] \u0026amp;\u0026amp; source \u0026#34;$HOME/.sdkman/bin/sdkman-init.sh\u0026#34; found inside your `.bashrc` or `.zshrc`.\nThe weird part of the story is that after checking what’s inside those files, the sdkman initialization lines were present.\nbashrc is not executed I searched online for `github actions bashrc not executed` The first result that came back was pretty enlightening. According to https://github.community/t/self-hosted-not-using-bashrc/18358/2:\n`In order for individual steps to make use of the .bashrc, one needs to explicitly request it by setting the default shell options:\n1 2 3 4 name: use bashrc defaults: run: shell: bash -ieo pipefail {0} I added this to my action and things worked like a charm.\nA full example 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 name: Build env: MAVEN_ARGS: -B -e # We need to set these defaults so that .bashrc is called for each step. # This is needed so that sdkman can be properly initialized defaults: run: shell: bash -ieo pipefail {0} on: push: branches: - master pull_request: jobs: build: name: Clojure on Mandrel ${{ matrix.java-version }} with Leiningen ${{ matrix.lein-version }} runs-on: ubuntu-latest strategy: matrix: java-version: [21.2.0.0-mandrel] lein-version: [2.9.7] steps: - name: Checkout uses: actions/checkout@v2.3.4 - name: Setup sdkman run: | curl -s \u0026#34;https://get.sdkman.io\u0026#34; | bash source \u0026#34;$HOME/.sdkman/bin/sdkman-init.sh\u0026#34; sdkman_auto_answer=false sdkman_selfupdate_enable=false - name: Setup java run: | sdk install java ${{matrix.java-version}} sdk default java ${{matrix.java-version}} - name: Setup leiningen run: | sdk install leiningen ${{matrix.lein-version}} sdk default leiningen ${{matrix.lein-version}} - name: Run tests run: lein test - name: Build native image run: | lein native-image The full project can be found at: https://github.com/iotemplates/clojure-cli. As always, I hope this was helpful. See ya!\n","date":1632118800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"387259dd3dbde475bcf704c965b7ed4b","permalink":"https://iocanel.com/2021/09/using-sdkman-in-github-actions/","publishdate":"2021-09-20T09:20:00+03:00","relpermalink":"/2021/09/using-sdkman-in-github-actions/","section":"post","summary":"Introduction I was experimenting with some Github Actions that needed to make use of Mandrel, so I thought that I should use sdkman. I ran into some issues though, and I thought I should document the experience.\nThe main issue I encountered is that no matter how I mixed sdkman into the mix, my steps acted like it was not there.\nThe sdkman action It seems that there is a Github Action for sdkman available, which should allow you to manage any `candidate`. I used it like this:\n","tags":["sdkman","github"],"title":"Using sdkman in github actions","type":"post"},{"authors":null,"categories":["hints"],"content":"Prologue These are just some personal notes that I’ll surely forget unless I write them down.\nThe problem As I have been blogging for over a decade now and most of the time I am sharing code, I needed a decent way to highlight my code and make it available to users. 
In the beginning I was using blogger but later on I migrated to wordpress.\nSo, I needed a syntax highlighting solution similar to what I was using for blogger.\nThe solution A quick search on the internet revealed the Syntax Highlighter Evolved plugin, which I installed and started using.\nMore problems After using the Syntax Highlighter Evolved plugin for a while I realized that it came with a few issues.\nMissing an action toolbar I am sure you’ve seen code blocks that come not only with syntax highlighting but also with a nice toolbar that allows you to copy, print, open in a new tab etc. I needed something like that but it was missing.\nBringing back the toolbar Luckily, I figured that downgrading to version 2.x of the plugin brings the toolbar back. As of now, I see no reason to use 3.x, the old one seems way nicer!\nEscaping of characters The most annoying thing was that the code I shared was escaped. All html symbols like `\u0026lt;` and `\u0026gt;` were replaced by `lt;` and `gt;` respectively. I searched for a solution and it seems there were plenty available on the internet, but I settled on the one below.\nUsing the classical editor Wordpress nowadays comes with many different editors:\nWYSIWYG Gutenberg Classical It seems that the best editor to use along with the Syntax Highlighter Evolved plugin is the classic editor.\nOnce the plugin is installed it can be easily enabled for all or specific users. The easiest solution for me was all users.\nEnabling the classic editor is the first step. Next step is to edit the post and fix the broken code blocks (they will not be escaped again). If you are using an external post editor, just publish the post again and you’re golden. For example, I am using Org 2 Blog so I just had to re-publish the broken posts and everything was fixed!\nEpilogue Now, the code blocks seem really nice and ready for use. I hope you found it useful. My future self most certainly will!\n","date":1630489080,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"def9bac0bbbd9ec301f86b55dcee7139","permalink":"https://iocanel.com/2021/09/wordpress-notes-on-syntax-highlighting/","publishdate":"2021-09-01T12:38:00+03:00","relpermalink":"/2021/09/wordpress-notes-on-syntax-highlighting/","section":"post","summary":"Prologue These are just some personal notes, that I’ll surely forget unless I write them down.\nThe problem As I have been blogging for over a decade now and most of the time I am sharing code, I needed a decent way to highlight my code and make it available to users. In the beginning I was using blogger but later on I migrated to wordpress.\nSo, I needed a syntax highlighting solution similar to what I was using for blogger.\n","tags":["wordpress"],"title":"Wordpress: Notes on syntax highlighting","type":"post"},{"authors":null,"categories":["hobbies"],"content":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nIn this section I am going to discuss why markdown is the ideal format for your notes. I am also going to cover how to use markdown in order to maintain a wiki / second brain for your Jiu Jitsu notes.\nWhat is markdown ? Markdown is a lightweight markup language for creating formatted text using a plain-text editor.
Formatting includes things like:\nHeaders Bold, italic, underlined text Images Hyperlinks Tables If you know what html is, you can think of markdown as an alternative to html that, instead of weird tags, just makes clever use of symbols.\nHere is an example:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 # Heading ## Sub-heading Unordered list: - item 1 - item 2 - item 3 | Syntax | Description | | ----------- | ----------- | | Header | Title | | Paragraph | Text | Why markdown ? It is 100% pure text. No proprietary file formats, no coupling to a particular editor or tool. You can easily edit it from all your devices without the need of any specialized software.\nThis also means that you can easily generate or manipulate it using scripts (cough cough). This is really important because we can easily export information from instructionals directly into markdown.\nFor example, we can generate an animated gif, as demonstrated in previous posts, and embed the image into markdown (e.g. see my notes on ‘Double Under’).\nMost importantly, markdown supports links, which is what makes it suitable for building ourselves a second brain (interconnected notes).\nWhat is a wiki ? After searching for a proper definition in multiple wiki pages, I came up with:\n`Wiki is a knowledge base presented as a collection of well connected web pages, collaboratively edited.`\nA richer definition can be found in wikipedia: wiki.\nIn this series of posts we don’t really care about the collaborative part, but more about the edited part, which implies that a wiki is something living / evolving, that is expected to be edited / updated.\nWhy wiki ? When I first started taking notes on Jiu Jitsu, I used a single text file, where I kept things. As the file grew larger, it was becoming harder and harder to easily jump to a particular note in the file. Also, there were cases where I needed to link notes together …\nThink for a moment Juji gatame (armbar). How does one organize notes on juji gatame?\nDo they go in the attacks from mount section? Do they go in the attacks from closed guard section? Do they go in the flying attacks? Do they go in the escapes from popular attacks? I think that it should go everywhere. And the only pragmatic way for this to happen is by linking `juji gatame` to all of the sections listed above.\nWhen it comes to note taking, anything that can’t be represented by a single tree-like structure and contains links from one topic to another is better split per topic, using links to bring the pieces together.\nThis alone is enough for one to pick up a wiki. Additional points for familiarity. And most importantly it is something that can be easily combined with markdown, which is already mentioned above.\nHave a look at my demo wiki, to get some idea:\nThis is not my complete wiki but something that I put together for the sake of this post (with hopefully enough teasers inside). It includes:\nChunks of my personal notes Flow chart diagrams (for techniques) that I created myself (and yes, I will blog about how you can create them too). An animated gif or two that summarize techniques This might also be a nice starting point for your own wiki, if you are sold on the idea.\nCreating a markdown based wiki for Jiu Jitsu Next step is to pick the right tool for the job.
Below are the top three candidates:\nGithub mdwiki tiddlywiki Github Github is a git hosting service.\nOversimplification alert\nThink of it as a service that allows you to create public or private shared folders, that contain textual (mostly) and binary files. The service also keeps a history of changes and provides a platform for collaboration with others. I wouldn’t suggest it to people not already familiar with git.\nMy demo wiki is hosted on Github, so you get the idea.\nTiddlywiki A wiki solution, that allows users to host their wiki either locally or publicly. It’s pretty extensible and one of the extensions provides markdown support. Even though it seems pretty powerful, the installation of extensions proved to be a little bit tricky for me, so I wouldn’t recommend it either.\nmdwiki mdwiki (as the name implies) is a markdown based wiki. I found it pretty simple to install and use and it’s what I recommend in this post. Note: This solution is not standalone and does require the use of an http server (see below).\nInstalling mdwiki Go to mdwiki releases page and grab the latest release zip file. At the time of writing this was …","date":1630349280,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"7eadea2779a0e085a0978e65eae5ad49","permalink":"https://iocanel.com/2021/08/hackers-guide-to-jiu-jitsu-markdown-wiki/","publishdate":"2021-08-30T21:48:00+03:00","relpermalink":"/2021/08/hackers-guide-to-jiu-jitsu-markdown-wiki/","section":"post","summary":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nIn this section I am going to discuss why markdown is the ideal format for your notes. I am also going to cover how to use markdown in order to maintain a wiki / second brain for your Jiu Jitsu notes.\n","tags":["jiu jitsu"],"title":"Hackers guide to Jiu Jitsu: Markdown Wiki","type":"post"},{"authors":null,"categories":null,"content":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nThis post is going to demonstrate how to use mplayer for watching Jiu Jitsu instructionals, in order to:\nCapture notes Create bookmarks Create animated gifs demonstrating techniques This post will cover the fundamentals and will be the base for future posts that will demonstrate integrations with other tools.\nWhat is mplayer ? mplayer, as the name implies, is a video player. It’s free \u0026amp; opensource and available for most operating systems. It’s pretty minimal but powerful and is often used by other players as a backend.\nThere are two main features that make it stand out from the rest of the available players.\nSlave mode When mplayer is run in slave mode, it allows other programs to communicate with it, through a file. Programs append commands to the file and mplayer can pick them up. So, other programs can\nstart / stop go to a specific timestamp extract player information Custom key bindings and commands With custom keybindings and commands users are able to easily invoke external scripts, which is very handy as we will see later on.\nWhy do we need mplayer ? In previous parts of the series, we saw how we could do things like creating animated gifs.
While technically it was pretty straightforward, it was not very user friendly as the user had to manually keep track of the file name and start/stop timestamps.\nmplayer running in slave mode can easily help us create a user friendly solution to this problem.\nSometimes we just want to bookmark the video currently playing so that we can resume later on. Other times we just want to have bookmarks as a reference in our notes. Again mplayer can provide an elegant solution to these problems.\nInstalling mplayer This section describes how to install it based on your operating system.\nLinux If you are using linux chances are that you don’t really need me to tell you how to install it.\nFedora 1 sudo dnf -y install mplayer Ubuntu 1 sudo apt-get install mplayer OSX 1 brew install mplayer Windows Windows users will have to install and get familiar with wsl, first. Then:\n1 sudo apt-get install mplayer From now on all commands we provide will need to go via wsl unless explicitly specified.\nSlave mode To start mplayer in slave mode:\n1 mplayer -slave -quiet \u0026lt;movie\u0026gt; Now you can enter commands in the console and read the output from there.\nOr you can use a fifo file instead:\n1 2 mkfifo \u0026lt;/tmp/fifofile\u0026gt; mplayer -slave -input file=\u0026lt;/tmp/fifofile\u0026gt; \u0026lt;movie\u0026gt; However, it’s much simpler if you just configure mplayer to always run in slave mode (by adding the config below to `.mplayer/config`):\n1 2 slave=true input=file=/path/to/home/.local/share/mplayer/fifo This assumes that you’ve created a fifo file up front:\n1 2 mkdir -p ~/.local/share/mplayer mkfifo ~/.local/share/mplayer/fifo Note: You can use whatever path you like for the fifo file.\nUsing the slave mode We will start mplayer in slave mode and redirect its output to a temporary file so that we can process the command output:\n1 mplayer -slave -input file=\u0026lt;/tmp/fifofile\u0026gt; \u0026lt;movie\u0026gt; \u0026gt; \u0026lt;/tmp/output\u0026gt; Now we can start executing commands:\nGetting the file name We are going to send `get_file_name` to the player in order to get the file name:\n1 2 3 echo get_file_name \u0026gt; /tmp/fifofile sleep 1 cat /tmp/output | grep ANS_FILENAME | tail -n 1 | cut -d \u0026#34;=\u0026#34; -f2 Getting the timestamp We are going to send `get_time_pos` to the player in order to get the time position:\n1 2 3 echo get_time_pos \u0026gt; /tmp/fifofile sleep 1 cat /tmp/output | grep ANS_TIME_POSITION | tail -n 1 | cut -d \u0026#34;=\u0026#34; -f2 Full list of available commands You can find a complete reference of commands at: http://www.mplayerhq.hu/DOCS/tech/slave.txt\nPutting the commands together Let’s combine the commands above in order to easily create an animated gif. The idea is to have a command to:\nmark the beginning mark the end create the animated gif The following scripts will assume that the fifo file can be found at `~/.local/share/mplayer/fifo` and the output is redirected to `~/.local/share/mplayer/output`.\nMark the beginning of a subsection We can use the slave mode in order to ask the player which file is currently playing and which is the current position in the file.
We will save those under `~/.local/share/mplayer/filename` and `~/.local/share/mplayer/beginning`.\n1 2 3 4 5 6 #!/bin/bash echo get_property path \u0026gt; ~/.local/share/mplayer/fifo echo get_time_pos \u0026gt; ~/.local/share/mplayer/fifo sleep 1 cat ~/.local/share/mplayer/output | grep ANS_path | tail -n 1 | cut -d \u0026#34;=\u0026#34; -f2 \u0026gt; ~/.local/share/mplayer/filename cat ~/.local/share/mplayer/output | grep ANS_TIME_POSITION | tail -n 1 | cut -d \u0026#34;=\u0026#34; -f2 \u0026gt; ~/.local/share/mplayer/beginning Mark the end of a subsection In the same spirit we can use `~/.local/share/mplayer/end` in order to mark the end of a subsection.\n1 2 3 4 5 6 #!/bin/bash echo …","date":1630349280,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"75bd6a37c9a4e53c67391c0dc3a92250","permalink":"https://iocanel.com/2021/08/hackers-guide-to-jiu-jitsu-mplayer/","publishdate":"2021-08-30T21:48:00+03:00","relpermalink":"/2021/08/hackers-guide-to-jiu-jitsu-mplayer/","section":"post","summary":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nThis post is going to demonstrate how to use mplayer for watching Jiu Jitsu instructionals, in order to:\nCapture notes Create bookmarks Create animated gifs demonstrating techniques This post will cover the fundamentals and will be the base for future posts that will demonstrate integrations with other tools.\n","tags":null,"title":"Hackers guide to Jiu Jitsu: mplayer","type":"post"},{"authors":null,"categories":["hobbies"],"content":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nThis post is going to demonstrate how to use ffmpeg in order to:\nSplit long instructionals into logical chapters Capture screenshots Create animated gifs demonstrating techniques This post will cover the fundamentals and will be the base for future posts that will demonstrate integrations with other tools.\nWhat is ffmpeg ? ffmpeg is a tool for recording, manipulating and streaming video. It’s free \u0026amp; opensource and available for most operating systems. In this post we are going to focus on the `manipulating` part.\nWhy do we need ffmpeg ? Jiu Jitsu instructionals are long. Even though they are usually split into pieces, each piece easily exceeds one hour in duration and looks more like a seminar than a lesson. Personally, I find that in most cases it’s really hard to digest more than 10-15 minutes in a single sitting. So, I’d like to split the videos even further, maybe per scene.\nAlso, I would like to embed parts of the instructional directly inside my notes. For example, in the section where I have my notes on `Juji Gatame` I’d like to have an animated gif demonstrating the technique. In rarer cases, a single screenshot from the video would suffice (e.g.
to demonstrate hand placement in various techniques).\nInstalling ffmpeg This section describes how to install it based on your operating system.\nLinux If you are using linux chances are that you don’t really need me to tell you how to install it.\nFedora 1 sudo dnf -y install ffmpeg Ubuntu 1 sudo apt-get install ffmpeg OSX 1 brew install ffmpeg Windows Windows users will have to install and get familiar with wsl, first. Then:\n1 sudo apt-get install ffmpeg From now on all commands we provide will need to go via wsl unless explicitly specified.\nDetecting chapters To make long instructionals more usable:\neasier to search split into digestible chunks we will have to split them.\nLuckily, most of them contain multiple scenes each starting with a title screen, like the ones shown below:\nThis makes it possible to use ffmpeg in order to detect scenes. The idea is to detect frames that have noticeable differences from the previous one.\nThis can be done using a command like:\n1 ffprobe -show_frames -of compact=p=0 -f lavfi \u0026#34;movie=instructional.mkv,select=gt(scene\\,0.8)\u0026#34; | awk -F \u0026#34;|\u0026#34; \u0026#39;{print $7}\u0026#39; | cut -f2 -d\u0026#34;=\u0026#34; The command above does the following:\ncollect frames that have 80% difference in pixels from the previous one (you can play around with this value) grab the value of the 7th column of the output that contains the timestamp keep only the numeric value The result will be the timestamps in seconds of each different scene.\nThere are many things you can do with this timestamp. Below are some examples:\nSplitting the video by chapter The script below will attempt to detect chapters and split the long video accordingly.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 #!/bin/bash VIDEO=$1 EXTENSION=\u0026#34;${VIDEO##*.}\u0026#34; FRAMERATE=5 SCALE=\u0026#34;512:-1\u0026#34; begin=\u0026#34;00:00:00\u0026#34; scene=1 # For each timestamp: ffprobe -show_frames -of compact=p=0 -f lavfi \u0026#34;movie=$1,select=gt(scene\\,0.8)\u0026#34; 2\u0026gt; /dev/null | awk -F \u0026#34;|\u0026#34; \u0026#39;{print $7}\u0026#39; | cut -f2 -d\u0026#34;=\u0026#34; | while read timestamp; do #Keep the integer part of the timestamp ts=`echo $timestamp | cut -d\u0026#34;.\u0026#34; -f1` #Convert timestamp to time using the HH:mm:ss format hours=`expr $ts / 3600` if [ $hours -lt 10 ]; then hours=\u0026#34;0$hours\u0026#34; fi minutes=`expr $ts % 3600 / 60` if [ $minutes -lt 10 ]; then minutes=\u0026#34;0$minutes\u0026#34; fi seconds=`expr $ts % 60` if [ $seconds -lt 10 ]; then seconds=\u0026#34;0$seconds\u0026#34; fi end=\u0026#34;$hours:$minutes:$seconds\u0026#34; # Perform the split ffmpeg -y -i $1 -ss $begin -to $end scene-$scene.$EXTENSION \u0026lt; /dev/null 2\u0026gt; /dev/null begin=\u0026#34;$hours:$minutes:$seconds\u0026#34; let scene=$scene+1 done If the script isn’t accurate enough, you may need to tinker with the pixel percentage.\nCreating animated gifs Even after splitting a large video into smaller chunks, those chunks may not be small enough. Sometimes, you just need to get a glimpse of a technique in order to remember what it is about.
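As an aside, before the gif script that follows, the simpler case promised in the intro, capturing a single screenshot, is a one-liner. A minimal sketch (the file name and timestamp are placeholders):

```bash
# grab a single frame at the given timestamp and save it as a png
ffmpeg -ss 00:12:34 -i instructional.mkv -frames:v 1 hand-placement.png
```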
It often helps to embed animated images directly inside your notes (assuming they are in a digital format).\nHere’s an example of an animated image demonstrating the `pumping method` for breaking grips, along with the script that creates such gifs:\n1 2 3 4 5 6 7 8 9 10 11 12 #!/bin/bash VIDEO=$1 BEGINNING=${2:-\u0026#34;00:00:00\u0026#34;} END=${3:-\u0026#34;00:01:00\u0026#34;} FRAMERATE=${4:-5} SCALE=${5:-\u0026#34;512:-1\u0026#34;} NAME=\u0026#34;${VIDEO%.*}\u0026#34; EXTENSION=\u0026#34;${VIDEO##*.}\u0026#34; ffmpeg -y -i \u0026#34;$VIDEO\u0026#34; -r $FRAMERATE -vf scale=$SCALE -ss $BEGINNING -to $END $NAME.gif \u0026lt; /dev/null 2\u0026gt; /dev/null The script can then be invoked like:\n1 create-animated-gif.sh \u0026lt;your video here\u0026gt; \u0026lt;beginning HH:mm:ss\u0026gt; \u0026lt;end HH:mm:ss\u0026gt; The challenge here is to spot and keep track of the beginning and end times. In future posts I am going to …","date":1628713200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"184ff749084d2e0f687704d3a3b928fa","permalink":"https://iocanel.com/2021/08/hackers-guide-to-jiu-jitsu-ffmpeg/","publishdate":"2021-08-11T23:20:00+03:00","relpermalink":"/2021/08/hackers-guide-to-jiu-jitsu-ffmpeg/","section":"post","summary":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nThis post is going to demonstrate how to use ffmpeg in order to:\nSplit long instructionals into logical chapters Capture screenshots Create animated gifs demonstrating techniques This post will cover the fundamentals and will be the base for future posts that will demonstrate integrations with other tools.\n","tags":["jiu jitsu","ffmpeg"],"title":"Hackers guide to Jiu Jitsu: ffmpeg","type":"post"},{"authors":null,"categories":["hobbies"],"content":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nThis is the first post in a series of posts, documenting the process.\nWho is this series of posts for ? Jiu Jitsu practitioners with decent computer skills\nPeople that are using digital sources for acquiring new skills\nPeople that need to become better at learning\nSo, you may find interest in it even if you are not into Jiu Jitsu.\nIf you ever researched topics like:\nNote taking Second Brain / Mind mapping Flash cards Flow charts then chances are that you will find similar information here, applied to a specific topic.\nWhat makes Jiu Jitsu a great topic to use for demonstrating learning techniques ? It probably sounds like an oxymoron to use a combat sport to demonstrate things like note taking, learning etc, but is it? If you are not familiar with the sport, let me just say that it has a huge learning curve, due to the vast number of techniques and contexts. It’s so vast, that it’s typical nowadays that super specialized video instructionals are often more than 10 hours long. On top of that it’s a constantly evolving domain, which means that all related information should be live (easily accessible and editable).\nSo, to summarize:\nHuge domain Constantly evolving Video is the predominant source of learning Why ?
Solving practical problems The main reason is to solve practical problems I was having while studying with instructionals and online tutorials. You may find one or more of the questions below relatable:\nHow many times did you find a great resource online but lost it? Forgot to bookmark ? Lost bookmark ? Resource taken offline ? How many times did you find a great tip inside a huge video and it’s not easy to find it again? Video was not bookmarkable ? Video was on a DVD ? How many times have you taken a great note, that was then forgotten ? Lost ? Never reviewed the note ? Is it hard for you to retain the information you learn ? Have you ever felt that you need a more compact representation of knowledge than the original raw material ? Influences John Danaher is arguably one of the biggest influences in modern Jiu Jitsu. Danaher is a Jiu Jitsu and MMA coach but also holds a PhD in Philosophy. I think that what makes him stand out is his background in Philosophy, which he successfully applied to the sport, giving him a unique perspective.\nInfluenced by Danaher, I cannot help but think `How could I apply my unique skills and perspective to gain an angle in the sport?`.\nThe obvious answer is `By using my hacker skills to solve the practical problems mentioned in the previous section.`\nAnd this is pretty much what this series is about.\nFormat and topics The first parts of the series will include short posts related to video processing / playback tools and how they can be used to make the most out of instructionals. Then I am going to focus on note taking. Then I am going to demonstrate how to integrate everything together and we’ll see how it goes.\nAll the tools I am going to use are going to be free and opensource tools available in all three major operating systems (Windows, OSX and Linux).\nAn example of such tools:\nffmpeg mplayer tesseract emacs Feel free to drop a comment with requests, suggestions and feedback!\nPost index Hackers guide to Jiu Jitsu: intro wordpress version github version Hackers guide to Jiu Jitsu: ffmpeg wordpress version github version Hackers guide to Jiu Jitsu: mplayer wordpress version github version Hackers guide to Jiu Jitsu: markdown wiki wordpress version github version Hackers guide to Jiu Jitsu: flowcharts wordpress version github version ","date":1628711160,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"eaa935b3633d4c060522fb98e3127de6","permalink":"https://iocanel.com/2021/08/hackers-guide-to-jiu-jitsu/","publishdate":"2021-08-11T22:46:00+03:00","relpermalink":"/2021/08/hackers-guide-to-jiu-jitsu/","section":"post","summary":" Intro I am a 40+ software engineer and recreational Jiu Jitsu practitioner, struggling with the vast amount of information related to the sport. I decided to make use of my `computer` skills to aid me in the process of taming this new skill.\nThis is the first post in a series of posts, documenting the process.\nWho is this series of posts for ? Jiu Jitsu practitioners with decent computer skills\n","tags":["jiu jitsu"],"title":"Hackers guide to Jiu Jitsu","type":"post"},{"authors":null,"categories":["development"],"content":"Intro I used to be pretty vocal about things I work on. I used to write blogs, give conference talks or occasionally create short vlog kind of videos. If there is one topic I’ve completely missed, that is sundrio.\nSo, what is sundrio ?\nsundrio is a code generation toolkit for generating code that no one wants to write by hand and everyone enjoys using.
Besides the code generation frameworks, it also comes with modules (they are actually framework applications) for generating things like:\nBuilders Domain Specific Languages Any kind of boilerplate code (via templates) and can be used in many different contexts including annotation processing, build tool plugins and more.\nA little bit of history At some point I used to work on a project that contained many different builders. Everyone on the project agreed on the value of immutability and builders. Unfortunately, everyone had a different idea of what a builder should look like. Some builders were using prefixes like `with`, others were using `set`, others no prefix at all and there were even builders with no `build` method. It was clear to me that a tool for generating those builders was needed. And based on my experiences that generator needed to support at least:\nObject hierarchies Nesting This was a tool I never found the time to create.\nIn September 2014, I was sitting with a colleague in the airport in Rome waiting for my connecting flight. After dinner, he grabbed his laptop and started coding. He mentioned that he was experimenting on an annotation processor of sorts to solve a problem he had. I took out my laptop too and started a PoC on an annotation processor that would generate simple builders. Over time the builders became less simple and I started adding more and more features … Still, I didn’t have the time to make it a real project and properly promote it …\nAnd then kubernetes happened!\nI had the privilege to work on a team of early adopters and we started doing kubernetes things in java.\nThe code used to look like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 Pod pod = new Pod(); Metadata metadata = new Metadata(); metadata.setName(\u0026#34;my-pod\u0026#34;); Container container = new Container(); container.setName(\u0026#34;nginx\u0026#34;); container.setImage(\u0026#34;nginx:1.20.1\u0026#34;); PodSpec spec = new PodSpec(); spec.setContainers(Arrays.asList(container)); pod.setMetadata(metadata); pod.setSpec(spec); … and it got uglier and uglier as more complex resources came into the picture!\nSo, I decided to try out my builder generator on the kubernetes domain and see how it would look.\n1 2 3 4 5 6 7 8 9 10 11 Pod pod = new PodBuilder() .withNewMetadata() .withName(\u0026#34;my-pod\u0026#34;) .endMetadata() .withNewSpec() .addNewContainer() .withName(\u0026#34;nginx\u0026#34;) .withImage(\u0026#34;nginx:1.20.1\u0026#34;) .endContainer() .endSpec() .build(); While the amount of code is not significantly less, it is way more fluent and it becomes much easier to read and write due to its structural similarity with how these resources are represented in json or yaml:\n1 2 3 4 5 6 7 kind: Pod metadata: name: my-pod spec: containers: - name: nginx image: nginx:1.20.1 On top of that add the completion offered by modern IDEs and you get something way more pleasant to use.\nSo, the builder generator was released as a project called sundrio so that it could be used by the fabric8 kubernetes client. Later on, the official kubernetes client also adopted sundrio, so you could say sundrio builders have become the standard way to manipulate kubernetes resources in java.\nOver time, different features and modules were added that could be used outside of the context of builder generators, so it’s now pretty much a library / framework for code generation rather than anything else.\nUsing sundrio In this section I’ll walk you through the core sundrio concepts.
I will start with the core java framework, which you can use for code generation, and then I’ll focus on applications of the framework which can be used without having to worry much about the sundrio internals (e.g. the builder generator).\nManipulating java code In the core of sundrio lies the domain model, which represents core java types and constructs. It can be used to define types programmatically that can then be rendered into source:\n1 2 3 4 5 6 7 8 9 TypeDef greeter = new TypeDefBuilder() .withKind(Kind.Interface) .withName(\u0026#34;Greeter\u0026#34;) .addNewMethod() .withName(\u0026#34;helloWorld\u0026#34;) .endMethod() .build(); System.out.println(greeter.render()); The code above will output:\n1 2 3 interface Greeter { void helloWorld(); } Of course, no one really defines types from scratch programmatically. In most cases an input is used. The input is usually another class in the form of a source or class file. So sundrio provides a series of adapters that people can use to adapt existing classes, source files, etc into `TypeDef` instances.\nAnnotation processing One of the most common cases is when using annotation processing:\n1 2 AptContext aptContext = AptContext.create(processingEnv.getElementUtils(), processingEnv.getTypeUtils()); TypeDef …","date":1628144340,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"ed0681cdff0501a584b0464423b71739","permalink":"https://iocanel.com/2021/08/sundrio-a-framework-for-generating-code-that-no-one-wants-to-write/","publishdate":"2021-08-05T09:19:00+03:00","relpermalink":"/2021/08/sundrio-a-framework-for-generating-code-that-no-one-wants-to-write/","section":"post","summary":"Intro I used to be pretty vocal about things I work on. I used to write blogs, give conference talks or occasionally create short vlog kind of videos. If there is one topic I’ve completely missed, that is sundrio.\nSo, what is sundrio ?\nsundrio is a code generation toolkit for generating code that no one wants to write by hand and everyone enjoys using. Besides the code generation frameworks, it also comes with modules (they are actually framework applications) for generating things like:\n","tags":["sundrio"],"title":"Sundrio: A framework for generating code that no one wants to write","type":"post"},{"authors":null,"categories":["hints"],"content":"This is not a blog post. This is my Emacs powered nutrition tracker!\nNo, I mean it!\nIt’s the one file that contains all the code, templates and data of my tracker, exported in html.\nKeep reading, to see how you can harness the power of emacs and org mode to track your nutrition and even generate cool graphs like:\nFor a quick demo you can check this short Youtube demo: Nutrition tracking using Emacs.\nThe post is available in 3 formats.\nMy Wordpress Blog (wordpress sucks at rendering lisp, so the other formats are preferred) My Github Blog Gist All 3 formats are powered by a single org file called `nutrition.org`.\nFeel free to grab that file, save it in your computer and add the following line to your emacs configuration:\n1 (org-babel-load-file \u0026#34;/path/to/nutrition.org\u0026#34;) The idea Traditional nutrition tracking applications don’t work for me. I find it really hard to select the foods I ate from paginated lists of checkboxes (… even saying it out loud feels like cursing). Same goes for defining recipes.\nWhat these apps do really well is looking up base nutrient values, which is something you only need to do once per food. For capturing meals (i.e.
daily use) they suck!\nOn the other hand, using org-mode for capturing stuff (TODOs, ideas, notes) is really awesome!\nSo, I decided to create 3 lists:\nFoods Recipes Days These lists will hold individual nutrient values per food, per recipe and per day. The data will be captured in org mode tables (think of text file spreadsheets).\nFinally, the data will be aggregated into a table holding daily nutrition stats and a graph will be created from this table.\nSetup The whole setup is based on org-mode. This is a single org file that contains everything:\ndocs templates code config data Requirements To be able to successfully use the templates and code provided you will need to have `org-ql` installed on your system.\n1 (use-package org-ql) Additionally, you will need the following custom code:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 (defun iocanel/org-heading (heading tags) \u0026#34;Format the HEADING and the TAGS in the desired way.\u0026#34; (format \u0026#34;%-80s %s\u0026#34; heading tags)) (defun iocanel/org-trim-tags (h) \u0026#34;Removes all tags that are present in H.\u0026#34; (if h (string-trim (replace-regexp-in-string \u0026#34;:[a-zA-Z0-9_\\\\.-:]+:$\u0026#34; \u0026#34;\u0026#34; h)) nil)) (defun iocanel/org-get-entries (tag \u0026amp;optional f) (interactive) \u0026#34;Collects all headings that contain TAG from the current buffer or from file F.\u0026#34; (if f (mapcar #\u0026#39;iocanel/org-trim-tags (org-map-entries #\u0026#39;org-get-heading tag \u0026#39;file)) (mapcar #\u0026#39;iocanel/org-trim-tags (org-map-entries #\u0026#39;org-get-heading tag \u0026#39;agenda)))) (defun iocanel/org-get-property (file name tag property) \u0026#34;Extract the PROPERTY for NAME tagged with TAG in org FILE.\u0026#34; (cdr (assoc property (car (org-ql-query :select #\u0026#39;org-entry-properties :from file :where `(and (tags ,tag) (equal ,name (org-get-heading t t)))))))) Templates This section contains the templates used. These are org-mode capture templates. The templates follow these expansion rules.\nFood template We need to hold nutrient values per food. So, we are going to use a table per food with these values. So, for each food we have an org item that contains the table. The structure of such a list item is shown below:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 * %^{Food Name} :nutrition:food: :PROPERTIES: :UNIT: %^{Unit|gram|ml|slice|pcs|tspn|spn} :QUANTITY: %^{Quantity} :WEIGHT: %^{Weight in grams} :CALORIES: %^{Calories} :PROTEIN: %^{Protein} :CARBS: %^{Carbs} :FAT: %^{Fat} :END: %? #+TBLNAME: %\\1 | | INGREDIENT | SERVING | QUANTITY | CALORIES | PROTEIN | CARBS | FAT | |---+------------+---------+----------+----------+---------+-------+-----| | # | %\\1 | 1 | %\\4 | %\\5 | %\\6 | %\\7 | %\\8 | Recipe template We need the same for recipes. The only difference is that a recipe may contain multiple foods as ingredients.
So, we have a row per food.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 * %^{Recipe Name} :nutrition:recipe: :PROPERTIES: :MAIN_INGRIDIENT: %^{Food|%(string-join (iocanel/org-get-entries \u0026#34;+food\u0026#34;) \u0026#34;|\u0026#34;)} :SECOND_INGRIDIENT: %^{Food|None|%(string-join (iocanel/org-get-entries \u0026#34;+food\u0026#34;) \u0026#34;|\u0026#34;)} :THIRD_INGREDIENT: %^{Food|None|%(string-join (iocanel/org-get-entries \u0026#34;+food\u0026#34;) \u0026#34;|\u0026#34;)} :FOURTH_INGREDIENT: %^{Food|None|%(string-join (iocanel/org-get-entries \u0026#34;+food\u0026#34;) \u0026#34;|\u0026#34;)} :END: #+TBLNAME: %\\1 | | INGREDIENT | SERVING | QUANTITY | CALORIES | PROTEIN | CARBS | FAT | |---+------------+----------+----------+----------+---------+--------+-----| | # | %\\2 | 1 | | | | | | | # | %\\3 | 1 | | | | | | | # | %\\4 | 1 | | | | | | | # | %\\5 | 1 | | | | | | |---+------------+----------+----------+----------+---------+--------+-----| | # | Total | | | | | | | #+TBLFM: $4=\u0026#39;(iocanel/get-recipe-property $2 $3 \u0026#34;QUANTITY\u0026#34;)::$5=\u0026#39;(iocanel/get-recipe-property $2 $3 \u0026#34;CALORIES\u0026#34;)::$6=\u0026#39;(iocanel/get-recipe-property $2 $3 \u0026#34;PROTEIN\u0026#34;)::$7=\u0026#39;(iocanel/get-recipe-property $2 $3 \u0026#34;CARBS\u0026#34;)::$8=\u0026#39;(iocanel/get-recipe-property $2 $3 …","date":1585928340,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"dc10eede2cae41019b4b7b816c573d09","permalink":"https://iocanel.com/2020/04/nutrition-tracking-using-emacs/","publishdate":"2020-04-03T18:39:00+03:00","relpermalink":"/2020/04/nutrition-tracking-using-emacs/","section":"post","summary":"This is not a blog post. This is my Emacs powered nutrition tracker!\nNo, I mean it!\nIt’s the one file that contains all the code, templates and data of my tracker, exported in html.\nKeep reading, to see how you can harness the power of emacs and org mode to track your nutrition and even generate cool graphs like:\nFor a quick demo you can check this short Youtube demo: Nutrition tracking using Emacs.\n","tags":["emacs","org-mode"],"title":"Nutrition tracking using Emacs","type":"post"},{"authors":null,"categories":["development"],"content":"Prologue ap4k is a collection of java annotations and processors for generating, customizing and testing kubernetes and openshift manifests.\nThe idea of using java annotations for customizing kubernetes and openshift manifests is not something entirely new. In 2015 fabric8 provided an artifact called `kubernetes-generator` (not to be confused with other generators under the fabric8 umbrella) that allowed developers to hook into the compilation process code that customized these manifests. The way the code was hooked into the compilation process was via java annotations. The idea was nice but did require developers to write actual code, and thus was soon abandoned in favor of the fabric8-maven-plugin, which was rewritten at the same time by Roland Huss.\nAnother approach that used java annotations for similar purposes was metaparticle, created by Brendan Burns a few years later. I really liked some aspects of it, though I couldn’t get used to the idea that a lot of things were done at `run` time.\nWhat I wanted instead is something that takes place purely at `compile` time. I wanted something with the power of fabric8-maven-plugin, but without paying the toll of having to write configuration in `xml`.
So, you could say that I was after an annotation based configuration layer for fabric8-maven-plugin.\nOr even better ….\n… not only for fabric8-maven-plugin but for any combination of build system and jvm language that supports annotations.\nAnd that is the rationale behind ap4k.\nA first glance at ap4k To trigger the generation of kubernetes manifests during compilation, one needs to add the `@KubernetesApplication` annotation on top of the main class:\n1 2 3 4 5 6 import io.ap4k.kubernetes.annotation.KubernetesApplication; @KubernetesApplication public class Main { //your code here } Once the compilation is done the following files are expected to be generated relative to the class output directory:\nMETA-INF/ap4k/kubernetes.json META-INF/ap4k/kubernetes.yml If you want to try it out by yourself you can check the kubernetes example that’s included in ap4k.\nFor openshift users, `@OpenshiftApplication` is available and will generate openshift flavored manifests (see openshift example). Whatever applies to `@KubernetesApplication` also applies to `@OpenshiftApplication`, so the rest of this post will just mention the first.\nThe same functionality could be provided by any scaffolding tool, or even a template engine, only this time it’s done using annotations ….\n… and here is where things get interesting.\nCustomizing the generated manifests Customization of the manifests can be done using `@KubernetesApplication`, for example to add a label:\n1 2 3 4 5 6 import io.ap4k.kubernetes.annotation.KubernetesApplication; @KubernetesApplication(labels=@Label(key=\u0026#34;foo\u0026#34;, value=\u0026#34;bar\u0026#34;)) public class Main { //your code here } Or to expose a port:\n1 2 3 4 5 6 7 import io.ap4k.kubernetes.annotation.KubernetesApplication; import io.ap4k.kubernetes.annotation.Port; @KubernetesApplication(port=@Port(name=\u0026#34;http\u0026#34;, containerPort=8181)) public class Main { //your code here } The addition of a port will result in having the container decorated with a `containerPort` and the manifest including a `Service` resource pointing to the defined port.\nWhat’s more interesting is that if jaxrs, spring rest etc annotations are detected in the code, then ap4k will perform the step above automatically (without the need to explicitly define the port).\nThis is demonstrated in: spring boot on kubernetes example.\nIntegration Testing While ap4k was in its early development, the need for running integration tests was pressing. So for the internal needs of the project junit5 extensions were added.\nThe role of the extensions was to orchestrate integration tests for kubernetes:\nperform container builds deploy generated resources wait until application is deployed run the actual tests Users familiar with arquillian-cube should see a resemblance here. With the exception of performing container builds the rest is a subset of arquillian-cube functionality.\nThe more these extensions were used, the more apparent it became that they should not be just for internal use, but something that all ap4k users could use….\nA closer look at the junit5 extension for kubernetes This extension provides the `@KubernetesIntegrationTest` annotation.
The presence of this annotation in a test class triggers the extension.\n1 2 3 4 5 6 import io.ap4k.testing.annotation.KubernetesIntegrationTest; @KubernetesIntegrationTest public class ExampleIT { //test code goes here } This alone is enough to at least test that the generated manifests can be successfully applied, to the point where the application starts and becomes ready. Of course, users would also want to perform integration tests on the actual application too (e.g. send http requests etc). For those cases, it’s possible to inject the application `Pod` into the tests and from there the users can decide how to proceed.\nHere’s an example that uses the …","date":1546875540,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"c1686ad04290ea128e472b3519e40123","permalink":"https://iocanel.com/2019/01/introducing-ap4k/","publishdate":"2019-01-07T17:39:00+02:00","relpermalink":"/2019/01/introducing-ap4k/","section":"post","summary":"Prologue ap4k is a collection of java annotations and processors for generating, customizing and testing kubernetes and openshift manifests.\nThe idea of using java annotations for customizing kubernetes and openshift manifests is not something entirely new. In 2015 fabric8 provided an artifact called `kubernetes-generator` (not to be confused with other generators under the fabric8 umbrella) that allowed developers to hook into the compilation process code that customized these manifests. The way the code was hooked into the compilation process was via java annotations. The idea was nice but did require developers to write actual code, and thus was soon abandoned in favor of the fabric8-maven-plugin, which was rewritten at the same time by Roland Huss.\n","tags":["java","kubernetes","openshift","dekorate"],"title":"Introducing Ap4K","type":"post"},{"authors":null,"categories":["development"],"content":"Prologue As I am approaching my 40s it’s becoming harder and harder to get really excited about a new framework. There are of course some exceptions to this rule and micronaut is such an exception. I won’t get into details here, but in many ways I feel that micronaut is a framework I would like to have written myself…\nSo, this post is going to be a first look at micronaut. It will include:\nan introduction my first application packaging the application as a docker image packaging and running inside openshift. What is micronaut? According to the official documentation micronaut is a micro services framework for building modular and testable microservice applications. Some highlights:\nOwn DI Autoconfiguration Service Discovery Http Routing and more…\nThe framework has been created by the same team that brought us grails and it does look like it in many ways. When it comes to features however, it feels like a combination of spring boot and spring cloud that promises to be more lightweight.\nMore lightweight? Traditional DI approaches in Java, be it spring, CDI etc, are built around reflection, proxies etc. Not so long ago there was an effort, aimed mostly at mobile devices, that was built around the idea of handling most of the problem at compile time instead of runtime. The project was called dagger.
I am not sure how it went in terms of adoption, but I didn’t feel it ever had a strong presence in the enterprise world.\nWhat do these have to do with micronaut?\nmicronaut is using a similar approach to dagger, relying more on annotation processors instead of using reflection, proxies etc.\nGetting started The first thing one needs to get started with micronaut is the `mn` binary, which gives you access to a grails-like cli:\nInstallation To install `mn`, the documentation suggests the use of sdkman (I’ve also blogged on sdkman here).\n1 sdk install micronaut Creating a hello world example Once the installation is complete you can create a new micronaut application using the cli:\n1 mn create-app helloworld The generated project is a docker-ready gradle project that contains just a single class:\n1 2 3 4 5 6 7 8 9 10 package helloworld; import io.micronaut.runtime.Micronaut; public class Application { public static void main(String[] args) { Micronaut.run(Application.class); } } Note that options are provided to select language, build tool and testing framework.\nThis will be very familiar to spring boot users.\nNow, let’s see how we can create a rest controller. From within the helloworld directory:\n1 mn create-controller HelloController The command will generate the controller class and also a test for the controller.\nThe controller out of the box will just provide a single method that returns http status `OK`. That can be easily modified, to:\n1 2 3 4 5 6 7 8 9 10 11 12 13 package helloworld; import io.micronaut.http.annotation.Controller; import io.micronaut.http.annotation.Get; @Controller(\u0026#34;/hello\u0026#34;) public class HelloController { @Get(\u0026#34;/\u0026#34;) public String index() { return \u0026#34;Hello World!\u0026#34;; } } To run the application you can just use:\n1 ./gradlew run Noteworthy It seems that it’s possible to specify things like language and testing framework not only at the application level but also at the controller level. So for instance we can add a second controller in kotlin:\n1 mn create-controller KotlinController --lang kotlin The code generation part worked a treat, however I wasn’t able to get the kotlin controller (inside a java project) running, even when I manually added the kotlin plugin inside the `build.gradle` file.\nPackaging the application As mentioned above the generated app is docker-ready, meaning that it comes with a Dockerfile.\n1 docker build -t iocanel/mn-helloworld:latest . The first time I tried to build the image, it failed, and that was due to the fact that the docker build relies on copying the jar that’s expected to be built locally. While I am not against this approach, when it’s not coordinated by an external tool (e.g. fabric8 maven plugin) it does feel a bit weird.\nSecond attempt:\n1 2 ./gradlew build docker build -t iocanel/mn-helloworld:latest . This time everything worked smoothly! Let’s see what we got in terms of size and startup times compared to spring boot.\njar uberjar docker startup time micronaut 1.4K 12M 114M 0.892 sec spring boot 3.4K 16M 119M 2.232 sec Please note that these measurements are simplistic, they are not meant to prove anything and are there just to give a very rough idea of the overall behavior of micronaut.\nPackaging and running inside Openshift For vanilla kubernetes the packaging process doesn’t differ much. In this section I’ll describe how you can package and run the application in openshift.\nThe first step is to define a binary build.
The binary build will use the `source to image` build for java. Once the build is defined, we can start it and pass the folder that contains the micronaut uberjar as a parameter.\n1 2 oc new-build --binary --strategy=source --name=helloworld …","date":1540467540,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"939c02a4fe5c09097c47cc1a245d8b98","permalink":"https://iocanel.com/2018/10/micronaut-introduction/","publishdate":"2018-10-25T14:39:00+03:00","relpermalink":"/2018/10/micronaut-introduction/","section":"post","summary":"Prologue As I am approaching my 40s it’s becoming harder and harder to get really excited about a new framework. There are of course some exceptions to this rule and micronaut is such an exception. I won’t get into details here, but in many ways I feel that micronaut is a framework I would like to have written myself…\nSo, this post is going to be a first look at micronaut. It will include:\n","tags":["java","micronaut"],"title":"Micronaut: Introduction","type":"post"},{"authors":null,"categories":["tools"],"content":"Prologue I recently came across micronaut, one of the many java micro-frameworks that have gained a lot of interest lately. This particular framework was being installed locally using a tool that I hadn’t come across before: sdkman.\nThis will be a really short post about sdkman.\nWhat is sdkman? Even if you only use a computer for playing games, sooner or later you are going to have to manage multiple versions of the same piece of software. Now, if you are into development then it’s possible that you either have a handcrafted solution or use one provided by the operating system.\nHandcrafted Be it the jdk itself, maven or even my IDE, I used to throw everything under ~/tools as versioned directories (e.g. maven-2.2.9, maven-3.3.5 etc) and then use symbolic links so that I have a fixed name (e.g. maven) linked to a versioned folder (maven -\u0026gt; maven-3.3.5). My PATH only included the link and not the versioned folder, so switching versions was just a matter of pointing the link to a different version.\nOf course, this is one of the many ways to do things and is only described here to emphasize the importance of tools like sdkman.\nOperating system tools The last couple of years I’ve been mostly using linux and most of the distributions I’ve used included some sort of tooling for maintaining multiple versions of popular packages. Currently, I am on archlinux and for managing multiple versions of java I am using `archlinux-java` as described: https://wiki.archlinux.org/index.php/Java. Other distributions have similar tools.\nThis is definitely an improvement compared to the manual approach described above, but don’t expect to find support for more exotic stuff.\nMy understanding of sdkman is that it’s aiming to fill that gap for all `sdks`.\nInstallation The installation process is straightforward and it’s just a simple command:\n1 curl -s \u0026#34;https://get.sdkman.io\u0026#34; | bash and then for initialization:\n1 source \u0026#34;$HOME/.sdkman/bin/sdkman-init.sh\u0026#34; This will modify the bash/zsh rc files, so that it adds an export of the SDKMAN_DIR and also adds the sdkman initialization. While this is no biggie, as a lot of tools nowadays tend to modify your rc files, I am not really fond of this approach.\nTo verify the installation:\n1 sdk version Using sdkman To use sdkman you just need to use the `sdk` function.
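Before going over what I tried, it helps to know the general shape of its invocations. A hedged sketch (subcommands as documented at https://sdkman.io/; output abbreviated):

```bash
# general shape: sdk <subcommand> [candidate] [version]
sdk help             # list the available subcommands
sdk current java     # show the currently active version of a candidate
sdk use java 21.2.0.0-mandrel   # switch versions for the current shell only
```

Note that `sdk use` affects only the current shell session, while `sdk default` (shown below) changes the version for every new shell.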
As I was curious to see what sdks are supported, the first thing I tried was the list operation:\n1 sdk list This generated a long list, with things like:\nant maven gradle sbt spring boot micronaut java visualvm and more…\nInstalling an sdk I will use sdkman to install kotlin in my environment:\n1 sdk install kotlin Installing a specific version of an sdk This installed version 1.2.71. But what if I wanted to install an older version? Say `1.2.70`?\n1 sdk install kotlin 1.2.70 The older version got installed, but I was also prompted to select which one will be the default one.\nThis is really neat. I can verify that the version was successfully installed using the kotlin binary:\n1 kotlin -version Changing the default version of an sdk Now if I wanted to switch again to the latest version:\n1 sdk default kotlin 1.2.71 If no version is explicitly specified, sdkman will set the latest stable version as the default. That’s another nifty feature.\nBroadcast messages One other thing that I liked is that some of the sdk commands display a broadcast message that informs the user of new versions available etc. Really useful!\nClosing thoughts sdkman is not a tool that will change the world, but it does a simple job and does it really well. I’d like to see more sdks supported and of course not just java based ones. Personally, I am even tempted to use it for java itself, given that nowadays releases are so frequent that it’s hard to keep up!\n","date":1539694620,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"41faea0701086e8bb20118ea493c6693","permalink":"https://iocanel.com/2018/10/a-quick-look-at-sdkman/","publishdate":"2018-10-16T15:57:00+03:00","relpermalink":"/2018/10/a-quick-look-at-sdkman/","section":"post","summary":"Prologue I recently came across micronaut, one of the many java micro-frameworks that have gained a lot of interest lately. This particular framework was being installed locally using a tool that I hadn’t come across before: sdkman.\nThis will be a really short post about sdkman.\nWhat is sdkman? Even if you only use a computer for playing games, sooner or later you are going to have to manage multiple versions of the same piece of software. Now, if you are into development then it’s possible that you either have a handcrafted solution or use one provided by the operating system.\n","tags":["java","sdkman"],"title":"A quick look at sdkman","type":"post"},{"authors":null,"categories":["devops"],"content":"Introduction This is the second post in my series about the service catalog. If you haven’t done so already, please read the first post: service catalog: introduction.\nIn this second post I’ll create from scratch a spring boot application that exposes a JPA crud via rest. This application will use a service catalog managed microsoft sql server database and I will demonstrate how you can automagically connect to it using the service catalog connector.\nThe spring cloud connector There is a spring cloud project called spring cloud connectors. This project is all about connecting to cloud managed services. I have been working on an implementation specific to the service catalog.
The idea is that you can use the service catalog to manage the services and use the service catalog connector to transparently connect to them.\nAt the moment it supports only relational databases, but support for additional services will be added shortly.\nPreparation Most of the preparation has already been performed in the previous post but I’ll recap:\nStarted an openshift cluster. Installed the service catalog. Provisioned a microsoft sql server database instance (ironically) called `mymssql`. So what’s left?\nWe need to also configure permissions…\nAllowing our app to talk to the service catalog Out of the box (if we logged in as admins) we can list brokers, service classes, instances and bindings using `svcat`. Unfortunately, this is not the case for our application. The default service account is not expected to have permissions, so we need to grant them:\n1 2 oc adm policy add-cluster-role-to-user system:openshift:service-catalog:aggregate-to-view system:serviceaccount:myproject:default oc adm policy add-cluster-role-to-user system:aggregate-to-admin system:serviceaccount:myproject:default The commands above granted service catalog view permissions to the default service account of my project (which is literally called `myproject` and is the default project created for us).\nNow, we are ready to move to the actual application.\nThe actual code I’ll use the spring boot cli to generate a jpa rest application:\n1 spring init -d=data-jpa,data-rest,sqlserver demo.zip To easily deploy the project into kubernetes/openshift add the fabric8 maven plugin to your pom.xml:\n1 2 3 4 5 \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;io.fabric8\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;fabric8-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.5.40\u0026lt;/version\u0026gt; \u0026lt;/plugin\u0026gt; Now, let’s create an entity. How about a `Person`? Our person will be a simple JPA annotated POJO, with just:\nid first name last name … and it could look like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 import javax.persistence.Entity; import javax.persistence.Id; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; @Entity public class Person { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String firstName; private String lastName; public Long getId() { return this.id; } public void setId(Long id) { this.id=id; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName=firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName=lastName; } } To easily perform CRUD operations for our Person we need a repository.
Here’s one that uses PagingAndSortingRepository from spring data:\nimport org.springframework.data.repository.PagingAndSortingRepository; import org.springframework.data.repository.query.Param; import org.springframework.data.rest.core.annotation.RepositoryRestResource; import java.util.List; @RepositoryRestResource(collectionResourceRel = \u0026#34;people\u0026#34;, path = \u0026#34;people\u0026#34;) public interface PersonRepository extends PagingAndSortingRepository\u0026lt;Person, Long\u0026gt; { List\u0026lt;Person\u0026gt; findByLastName(@Param(\u0026#34;name\u0026#34;) String name); } JPA-wise the last thing we need is to include some microsoft sql server specific configuration in our application.properties:\nspring.jpa.hibernate.ddl-auto=create-drop spring.jpa.show-sql=true spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServerDialect spring.jpa.database-platform=org.hibernate.dialect.SQLServerDialect spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false And now we are done! Wait, how do we make the application talk to our sql server?\nAdding the service catalog connector We just need to add the connector to the class path:\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;me.snowdrop\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;servicecatalog-connector\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.2\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; And also create a simple bean for our DataSource:\nimport org.springframework.cloud.config.java.AbstractCloudConfig; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import javax.sql.DataSource; @Configuration public class CloudConfig extends AbstractCloudConfig { @Bean public DataSource dataSource() { …","date":1536821400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"776723317c0774305199df69a8b27b83","permalink":"https://iocanel.com/2018/09/service-catalog-connector/","publishdate":"2018-09-13T09:50:00+03:00","relpermalink":"/2018/09/service-catalog-connector/","section":"post","summary":"Introduction This is the second post in my series about the service catalog. If you haven’t done so already, please read the first post: service catalog: introduction.\nIn this second post I’ll create, from scratch, a spring boot application that exposes a JPA crud via rest. This application will use a service catalog managed microsoft sql server database and I will demonstrate how you can automagically connect to it using the service catalog connector.\nThe spring cloud connector There is a spring cloud project called spring cloud connectors. This project is all about connecting to cloud managed services. I have been working on an implementation specific to the service catalog. The idea is that you can use the service catalog to manage the services and use the service catalog connector to transparently connect to them.\n","tags":["kubernetes","svcat","java","spring"],"title":"Service Catalog: Connector","type":"post"},{"authors":null,"categories":["devops"],"content":"Overview This is the first of a series of posts around the service catalog.
The end goal is to demonstrate how the service catalog can simplify building apps on kubernetes and openshift.\nThe first part will cover:\nwhy how to install how to use The target environment will be openshift 3.10 on Linux using `oc cluster up` for development purposes.\nIntroduction Working with kubernetes since its early days, there are countless times when I had to create manifests for the services my application is using. By services I am referring to things like databases, messaging systems, or any other pieces of third party software my application might need.\nEach time, the process is the same:\nFind a suitable docker image. Search for matching manifests. Try out. Rinse and repeat. And even when all things are in place I have to find a way of letting my application know `how to connect` to the service. And of course, this only applies to services that are running `side by side` with the application.\nWhat about external services?\nThe service catalog is a solution that brings service brokers, as defined by the open service broker api, to kubernetes.\nIt provides a couple of new kinds of resources that define:\nservice broker service types service instances service bindings If you want to familiarize yourself with the purpose of those, please check the service catalog documentation.\nPreparation To manipulate the service catalog resources from the command line you will need the service catalog client.\nThe service catalog client You will need the `svcat` binary to interact with the catalog from the command line.\nOn my linux machine this can be done:\ncurl -sLO https://download.svcat.sh/cli/latest/linux/amd64/svcat chmod +x ./svcat mv ./svcat /usr/local/bin/ svcat version --client Full instructions (for all operating systems) can be found in the service catalog installation guide.\nPreparing the environment Installing the service catalog I will be using openshift 3.10, which I’ll start directly using:\noc cluster up oc login -u system:admin Then I just need to add the service catalog and a broker:\noc cluster add service-catalog oc cluster add automation-service-broker Validating the setup To make sure everything is fine let’s list the available brokers:\nsvcat get brokers The output should contain `openshift-automation-broker`.\nProvision a service Now, let’s create the database. I’ll be using microsoft sql server. So let’s see what the broker we installed has to offer:\nsvcat get plans | grep mssql If not obvious, this will list all the available classes and plans for ms sql server (classes refer to the service type and plans refer to the different flavors e.g. persistent).\nsvcat provision --class dh-mssql-apb --plan ephemeral mymssql Our database should be provisioned soon. Now all we need to do is to create a binding that our application will use to connect to the service.\nBinding to the service svcat bind mymssql What this actually does is create a new `Secret` with all the connection information; it also creates a `ServiceBinding` which binds the instance we created to the secret.
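To see what was produced, something like the following should work (assuming svcat’s default naming, where the binding and its secret take the instance’s name):
# list the bindings managed by the catalog
svcat get bindings
# inspect the generated secret holding the connection details
oc get secret mymssql -o yaml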
Any piece of code that needs to connect to the service we created can use the secret (in whatever way its preferable).\nIn the next part of this series we will introduce you to a tool that allows spring boot applications to automagically connect to service catalog managed services.\nStay tuned !\n","date":1536744120,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"382c0c2f7fbe08470c27e0c70c1f3fc0","permalink":"https://iocanel.com/2018/09/service-catalog-introduction/","publishdate":"2018-09-12T12:22:00+03:00","relpermalink":"/2018/09/service-catalog-introduction/","section":"post","summary":"Overview This is the first of a series of posts around the service catalog. The end goal is to demonstrate how the service catalog can simplify building apps on kubernetes and openshift.\nThe first part will cover:\nwhy how to install how to use The target environment will be openshift 3.10 on Linux using `oc cluster up` for development purposes.\nIntroduction Working with kubernetes since its early days, there are countless of times where I had to go into creating manifests for the services my application is using. By services I am referring to things like databases, messaging systems, or any other pieces of third party software my application might need.\n","tags":["kubernetes","svcat"],"title":"Service Catalog: Introduction","type":"post"},{"authors":null,"categories":["hints"],"content":"Overview This is a small post that describes how I made authoring markdown, org-mode etc easier by using snippets that help me handle links like a pro.\nPrologue I am a heavy user of org-mode. I use it for taking notes, writing blogs, presentations and so on. As a software developer I often use markdown too. In both cases at some point I have to deal with links.\nEmbarrassingly enough, I used to rely on my browsers bookmarks to handle links, so my workflow looked a little like:\nopen the browser search for the url of interest in bookmarks copy the url jump to the editor use the special syntax for adding links paste the copied url add a text to appear on the link I find this so counter-intuitive that is hands down the most boring thing for me when it comes to writing.\nQuickmarks All the cool kids should try… A year back (maybe) more I came across a video for linux geeks, according to which all the cool kids should try qutebrowser. And so I did.\nAmong other a feature of this browser I liked was quickmarks.\nWhat are quickmarks? Quickmarks are just labeled bookmarks.\nAnd what so special about them? It makes it easy to search for bookmarks, since the `open` action does not just search in history and bookmarks it also allows you to search by label. It might not sound much, but it did add a lot to my browsing experience.\nWhat does it have to do with writing docs, blogs and presentations? I’d like my editor to support quickmarks, so that I don’t have to jump back and forth to my browser.\nAs an emacs user adding quickmarks is as easy as adding the following code to my config:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 (defvar quickmarks-alist \u0026#39;() \u0026#34;A list of labeled links for quick access\u0026#34;) (setq quickmarks-alist \u0026#39;( ;; emacs (org-mode . https://orgmode.org) (emacs . http://emacs.org) (spacemacs . http://spacemacs.org) (yasnippets . https://github.com/joaotavora/yasnippet) ;; linux (i3 . https://i3wm.org) (mutt . http://www.mutt.org) (weechat . https://weechat.org) (qutebrowser . https://qutebrowser.org) (ranger . 
https://github.com/ranger/ranger) ;; work (docker . https://docker.io) (fabric8 . https://fabric8.io) (kubernetes . https://kubernetes.io) (openshift . https://openshift.com) (snowdrop . https://snowdrop.me) (spring . https://spring.io) (spring cloud . https://cloud.spring.io) (spring boot . https://spring.io/projects/spring-boot) (spring cloud connectors . https://cloud.spring.io/spring-cloud-connectors) (jenkins . https://jenkins.io) ) ) And then to easily access those links I can use a function as follows:\n1 2 3 4 (defun quickmarks-get (k) \u0026#34;Get the value of the quickmark with the key K.\u0026#34; (alist-get (intern k) quickmarks-alist) ) Creating quickmark aware snippets For writing snippets I use yasnippets. yasnippets allow users to integrate code into the snippet and that will help us look up our quickmarks from within the snippet.\nSo for markdown documents the snippet looks like:\n1 2 3 4 5 # -*- mode: snippet -*- # name: quickmark # key: qm # -- [${1:Name}](${1:$(quickmarks-get yas-text)}) $0 and for org-mode:\n1 2 3 4 5 # -*- mode: snippet -*- # name: quickmark # key: qm # -- [[${1:$(quickmarks-get yas-text)}][${1:Name}]] $0 Using the snippet in action Epilogue All of the code used in this post can also be found in my dotfiles repository.\nI hope you found it useful!\n","date":1536162720,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"978c8674b1146ff301956f766aa2dbbc","permalink":"https://iocanel.com/2018/09/quickmarks/","publishdate":"2018-09-05T18:52:00+03:00","relpermalink":"/2018/09/quickmarks/","section":"post","summary":"Overview This is a small post that describes how I made authoring markdown, org-mode etc easier by using snippets that help me handle links like a pro.\nPrologue I am a heavy user of org-mode. I use it for taking notes, writing blogs, presentations and so on. As a software developer I often use markdown too. In both cases at some point I have to deal with links.\nEmbarrassingly enough, I used to rely on my browsers bookmarks to handle links, so my workflow looked a little like:\n","tags":["emacs","org-mode"],"title":"Quickmarks","type":"post"},{"authors":null,"categories":["hints"],"content":"Prologue Every now then I see on social media people sharing the same old story: “Using shell scripting to workaround the limitations of their DevOps tools”. I’ve done it, my colleagues are doing it and most likely you have done it yourself.\nSo it seems that shell scripting is used to do the dirty work, yet its often considered by many the last resort. If you search on the web about popular ‘DevOps’ tools and skills, you’ll probably find:\nJenkins Ansible Git …Which are all awesome tools, but you won’t find shell scripting, ever wondered why?\nHere are a couple of thoughts that come into my mind:\nToo obvious? There are syntactical differences among different Unix systems. There is no good way in bundling and sharing shell code. For me the biggest issue I have with shell scripts is that I often find myself creating the same boilerplate code again and again, cause there isn’t an easy way to ‘import’ that code. So, I created a small pet project that intends to make sharing shell code a treat. The tools is called Grab.\nIntroducing: grab The last couple of years I played a lot with Jenkins pipeline libraries. I really enjoyed the fact that with the use of a simple annotation one could automatically fetch and reuse groovy code from github. 
The approach was similar to how golang’s go get fetches dependencies and it does work pretty well. What I didn’t like about Jenkins pipelines is that I had to run them inside Jenkins, which in many cases was overkill. Also, despite the fact that groovy is groovy, it’s not as popular as shell itself (shell might be underappreciated, but it’s definitely a more common skill than groovy). So, bringing that kind of experience to shell scripting was the main motivation behind grab.\nUsing grab The shell itself does allow us to `source` a script as long as that script is present in the file system. Grab’s mission is to get the script from the internet to the file system, so that it can be easily sourced.\nsource $(grab github.com/shellib/cli) The command above will perform the following steps:\ncheck under $HOME/.shellib for the file: github.com/shellib/cli/library.sh. download the file if missing. return the path so that it can be used by source. Note: by default grab will look for a `library.sh` file (see below how you can change that).\nRequesting a custom file from a repo In order to support repositories that don’t conform to the convention described above (having a library.sh in the root of the repository), or to support repositories with more than just shell scripts, grab supports specifying the library file explicitly:\nsource $(grab github.com/someorg/somerepo/somedir/somefile) Versions Each shell library repository may have tags. Grab allows the user to refer to a tag by appending the @ symbol followed by the tag:\nsource $(grab github.com/shellib/cli@1.0) Aliasing The more scripts you grab, the more likely you are to run into naming clashes. For example, it’s likely that two grabbed scripts contain a function with the same name. This is something that one can encounter in most modern programming languages when importing, requiring etc. One common solution is to use an alias for the imported package. A similar technique has been added to this tool: it allows you to grab a library under an alias, using the `as` keyword.\nsource $(grab \u0026lt;git repository\u0026gt; as \u0026lt;alias\u0026gt;) Then your code will be able to access all the functions provided by the library using the `\u0026lt;alias\u0026gt;::` prefix. Here’s a real example:\nsource $(grab github.com/shellib/cli as cli)\nIt’s important to clarify that the `::` has no special meaning or use in shell scripts; it’s just a separator that is used to separate the alias from the function name. This is something that was inspired by my friend and co-worker Roland Huss, who uses that separator to scope functions.\nLibraries Under the shellib organization on github I’ve also created a couple of libraries:\ncli (common cli utilities for handling arguments and flags). wait (shell utilities for waiting until a condition is met). maven (functions for handling maven releases). kubernetes (work in progress library with kubernetes functions). Writing reusable shell libraries It is really trivial to write a reusable shell library that is compliant with grab. All you need to do is to create a script that encapsulates its reusable pieces inside functions. That script needs to be called `library.sh` and be placed at the root of the repository.\nAlso, that script needs to be `source` friendly, meaning that it shouldn’t execute any code when sourced (unless of course special initialization is required). A minimal example follows.
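Here is a minimal sketch of such a library.sh (the greet function is made up; what matters is the shape — everything is wrapped in functions, so sourcing has no side effects):
#!/bin/bash
# library.sh: reusable pieces live in functions; nothing runs at source time
greet() {
  echo "Hello, ${1:-world}!"
}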
A simple trick to have a piece of code executed only when the script is not sourced is the following:\nif [ \u0026#34;${BASH_SOURCE[0]}\u0026#34; == \u0026#34;${0}\u0026#34; ] || [ \u0026#34;${BASH_SOURCE[0]}\u0026#34; == \u0026#34;\u0026#34; ]; then # Code to execute when not sourced goes here... fi Epilogue I hope you find it as useful as I did. If nothing else, it allows me to organize my shell scripts into reusable bits and push them to a git repo, …","date":1531076280,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"423f9e51e24688a79091fbbc952eeb5a","permalink":"https://iocanel.com/2018/07/reusing-shell-libraries/","publishdate":"2018-07-08T21:58:00+03:00","relpermalink":"/2018/07/reusing-shell-libraries/","section":"post","summary":"Prologue Every now and then I see on social media people sharing the same old story: “Using shell scripting to work around the limitations of their DevOps tools”. I’ve done it, my colleagues are doing it and most likely you have done it yourself.\nSo it seems that shell scripting is used to do the dirty work, yet it’s often considered by many the last resort. If you search the web for popular ‘DevOps’ tools and skills, you’ll probably find:\n","tags":["shell"],"title":"Reusing shell libraries","type":"post"},{"authors":null,"categories":["hints"],"content":"Prologue Lately I keep hearing about “how much software development has changed over the last half of the decade”. This usually refers to the adoption of containers, cloud etc. I would like to focus on another factor of the change: the plethora of development related systems and services.\nSo it’s typical for a team to have:\nversion control code review systems code analysis systems project management issue trackers continuous integration chat / messaging Add email to that and you realize that most development related tasks nowadays take place in the browser. Unfortunately, browsers by nature are unaware of the content they serve, so it’s not trivial to automate your workflow in the browser. So, if the browser is not going to play the role of ‘Swiss army knife’ for development then what?\nOne could put all hopes in modern IDEs, however IDEs tend to specialize more on language features and less on integration with external systems and services. The latter is usually a space where general purpose editors are better, mostly because they have a wider and more uniform audience. On the other hand these editors are not so rich in language related features.\nSo a big question is “Will editors like Atom, Emacs or Visual Studio Code ever be competitive with traditional IDEs for writing code?” At least for Java developers, this used to be a “not viable option”.\nThis is something that is going to change, due to the `Language Server Protocol`.\nWhat is the language server protocol? LSP is a standardized protocol for how “language servers” and “development tools” communicate. A language server implements all the language specific operations once and then different development tools can connect to it and get the functionality for free.\nSo, if I create a new language, say “AwesomeScript”, instead of creating support for all the editors out there, I’ll just need to implement the language server once. Then I would only need to write a little bit of code for each editor (if any at all), to hook into the “language server”.\nThe history of language servers Traditional editors like Emacs or vim have been using language servers for a while now.
Some examples that come to mind are:\nEnsime Eclim Meghanada There was no standard protocol at that time (though there were visible similarities). The effort of standardization was initiated by Microsoft, as they starting implementing one server after the other for the needs of visual studio code. More details on LSP history.\nLanguage Server Protocol and Java I’ve been writing Java for more than 15 years now. For nearly a decade I’ve been exclusively using Intellij for Java development. For everything else my goto editor has been Emacs (at least the last year or so). So, it made sense for me to experiment with LSP on Emacs and see how far can I get.\nEclipse Java Language Server For Java the most popular implementation of the protocol is Eclipse Java Language Server. At this point I have to clarify that I’ve never managed to productively use the Eclipse IDE. It always felt that it required a lot of manual configuration and tuning for things (e.g. maven support, apt and more) that other alternatives provided out of the box.\nThe initial experience exceeded my expectations.\nIt was a lot faster than Eclim. Navigating the code was working great. Functionality provided by lsp-ui worked surprisingly well (sideline, doc etc). It gave me access to some simple refactoring tasks(see the image above). Unfortunately, sooner or later the experience became way too Eclipsy for my taste. What do I mean by that? When added things like annotation processors etc into the mix, I started having issues that my “Google Fu” wasn’t able to overcome in the little time I had … … or I am just not used in the “Eclipse” way of things and didn’t want to put the extra effort … … or a little bit of both?\nIntelliJ Language Server Here’s where LSP becomes really interesting…\nI recently bumped into a project called Intellij LSP Server, that actually provided an Intellij plugin that exposed the IDEs capabilities through LSP. This made it possible to use it as a drop in replacement of Eclipse Java Language Server. So, I could get all the lsp-ui related stuff for free but with added Intellij coolness.\nIt was a blast! It provided everything that Eclipse Java Language Server did, but with:\nway better completion that was in par with Intellij. Better “run project” functionality. Here’s how completion looks:\nAnd here’s how the lsp-ui stuff work with Intellij LSP Server:\nThe only downside for this project is that its still in alpha state and in many cases the server dies or blocks for user input.\nHere’s an example:\nFile Cache Conflict When Intellij detects that a file has been externally modified it opens up a pop-up that prompts the user to select if it should reload from disk or use the cached version. When this dialog gets popped all the LSP operations are blocked. As there seems no obvious way to disable it I …","date":1529510400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"3b6c3e2da6d25aba05649fb7a93aba56","permalink":"https://iocanel.com/2018/06/language-server-protocol-java-and-emacs/","publishdate":"2018-06-20T19:00:00+03:00","relpermalink":"/2018/06/language-server-protocol-java-and-emacs/","section":"post","summary":"Prologue Lately I keep hearing about “how much software development has changed over the last half of the decade”. This usually refers to the adoption of containers, cloud etc. 
I would like to focus on an other factor of the change and that is the plethora of development related systems and services.\nSo its typical for a team to have:\nversion control code review systems code analysis systems project management issue trackers continuous integration chat / messaging Add email to that and you realize that most of development related tasks now days take place in the browser. Unfortunately, browsers by nature are unaware of the content they serve, so its not trivial to automate your workflow in the browser. So, if the browser is not going to play the role of ‘Swiss army knife’ for development then what?\n","tags":["java","lsp","emacs"],"title":"Language Server Protocol, Java and Emacs","type":"post"},{"authors":null,"categories":["devops"],"content":"Intro During the summer I had the chance to play a little bit with Jenkins inside Kubernetes. More specifically I wanted to see what’s the best way to get the Docker Workflow Plugin running. So, the idea was to have a Pod running Jenkins and use it to run builds that are defined using Docker Workflow Plugin. After a lot of reading and a lot more experimenting I found out that there are many ways of doing this, with different pros and different cons each. This post goes through all the available options. More specifically:\nBuilds running directly on Master Using the Docker Plugin to start Slaves Using the Docker Plugin and Docker in Docker Using Swarm clients Swarm with Docker in Docker Before I go through all the possible setups, I think that it might be helpful to describe what are all these plugins.\nDocker Plugin A Jenkins plugin that is using Docker in order to create and use slaves. It uses http in order to communicate with Docker and create new containers. These containers only need to be java ready and also run SSHD, so that the master can ssh into them and do its magic. There are a lot of images for slave containers over the internet, the most popular at the time of my reattach was the evarga jenkins slave. The plugin is usable but feels a little bit flaky, as it creates the Docker container but sometimes it fails to connect to the slave and retries (it usually takes 2 to 3 attempts). Tried with many different slave images and many different authentication methods (password, key auth etc) with similar experiences. Swarm Having a plugin to create the slave is one approach. The other is “Bring your own slaves” and this is pretty much what swarm is all about. The idea is that the Jenkins master is running the Swarm plugin and the users are responsible for starting the swarm clients (its just a java process). java -jar /path/to/swarm-client.jar http://jenkins.master:8080 view rawgistfile1.txt hosted with by GitHub The client connects to the master and let’s it know that it is up and running. Then the master is able to start builds on the client.\nDocker Workflow Plugin This plugin allows you to use Docker images and containers in workflow scripts, or in other words execute workflow steps inside Docker containers \u0026amp; create Docker from workflow scripts. Why?\nTo encapsulate all the requirements of your build in a Docker image and not worry on how to install and configure them. 
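In other words, it saves you from grooming every build agent by hand with something like the following (purely illustrative; the tool list is hypothetical and depends on what your builds actually need):
# without the plugin, each Jenkins slave needs its toolchain pre-installed
apt-get update
apt-get install -y openjdk-8-jdk maven git
# ...repeated for every tool, every version and every slave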
Here’s how an example Docker Workflow script looks like:\n1 2 3 4 5 6 node(\u0026#39;docker\u0026#39;) { docker.image(\u0026#39;maven\u0026#39;).inside { git \u0026#39;https://github.com/fabric8io/example-camel-cdi\u0026#39; sh \u0026#39;mvn clean install\u0026#39; } } Note: You don’t need to use the Docker Plugin to you the Docker Workflow Plugin. Also: The Docker Workflow Plugin is using the Docker binary. This means that you need to have the docker client installed wherever you intend to use the Docker Workflow Plugin. Almost forgot: The “executor” of the build and the containers that participate in the workflow, need to share the project workspace. I won’t go into details, right now. Just keep in mind that it usually requires access to specific paths on the docker host (or some short of shared filesystem). Failure to satisfy this requirements leads to “hard to detect” issues like builds hunging forever etc.\nNow we are ready to see what are the possible setups.\nNo slaves This is the simplest approach. It doesn’t involve Jenkins slaves, the builds run directly on the master by configuring a fixed pool of executors. Since there are no slaves, the container that runs Jenkins itself will need to have the Docker binary installed and configured to point to the actual Docker host.\nHow to use the docker host inside Kubernetes?\nThere are two approaches:\nUsing the Kubernetes API By mounting /var/run/docker.sock You can do (1) by using a simple shell script like the one below.\n1 2 3 4 5 #!/bin/bash KUBERNETES=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token` POD=`hostname` curl -s -k -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; $KUBERNETES/api/v1/namespaces/$KUBERNETES_NAMESPACE/pods/$POD | grep -i hostIp | cut -d \u0026#34;\\\u0026#34;\u0026#34; -f 4 You can (2) by specifying a hostDir volume mount on Jenkins POD.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 { \u0026#34;volumeMounts\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;docker-socket\u0026#34;, \u0026#34;mountPath\u0026#34;: \u0026#34;/var/run/docker.sock\u0026#34;, \u0026#34;readOnly\u0026#34;: false } ], \u0026#34;volumes\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;docker-socket\u0026#34;, \u0026#34;hostPath\u0026#34;: { \u0026#34;path\u0026#34;: \u0026#34;/var/run/docker.sock\u0026#34; } } ] } An actual example of such setup can be found here.\nPros: Simplest possible approach Minimal number of plugins Cons: Doesn’t scale Direct access to the Docker daemon Requires access to specific paths on the host (see notes on Docker Workflow Plugin) Docker Plugin managed Slaves The previous approach doesn’t scale for the obvious reasons. Since, Docker and Kubernetes are already in place, it sounds like a good idea to use them as a pool of resources. So we can add Docker Plugin and have it create a slave container for each build we …","date":1519849440,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"6a323367ece6073bd5192cb461d11cce","permalink":"https://iocanel.com/2018/02/jenkins-setups-for-kubernetes-and-docker-workflow/","publishdate":"2018-02-28T22:24:00+02:00","relpermalink":"/2018/02/jenkins-setups-for-kubernetes-and-docker-workflow/","section":"post","summary":"Intro During the summer I had the chance to play a little bit with Jenkins inside Kubernetes. More specifically I wanted to see what’s the best way to get the Docker Workflow Plugin running. 
So, the idea was to have a Pod running Jenkins and use it to run builds that are defined using Docker Workflow Plugin. After a lot of reading and a lot more experimenting I found out that there are many ways of doing this, with different pros and different cons each. This post goes through all the available options. More specifically:\n","tags":["jenkins","kubernetes","docker"],"title":"Jenkins setups for Kubernetes and Docker Workflow","type":"post"},{"authors":null,"categories":["hints"],"content":"intro openshift takes security seriously. Sometimes more seriously than I’d like (mostly cause I am lazy). One such example is the fact that containers run using arbitrary users. This is done as an extra measure to control damages, should a process somehow escapes its container boundaries.\nBut how does it affect users?\nthe problem Users need to follow certain guidelines when creating container images.\ndon’t assume a user you don’t have a known uid The uid of the user is not known in advnace. Also there is no way of controlling it.\nyou don’t have a prefixed username The same applies to the username (regardless of what’s in your Dockerfile). Even though the `whoami` command seems to always return `default`, I am not sure if this is something you can rely on.\nyou don’t have a home Executing command that rely on the $HOME environment variable, might not work as expected.\nexamples where this becomes a problem:\ngit The git binary complaints when there is no entry of the user inside the /etc/passwd file. Using an arbitrary user id, means that there will be no entry there and thus the git binary will refuse to work.\nmaven Maven picks up custom user settings by looking up for a settings.xml under ~/.m2/settings.xml?\nWhere does ~ point? Exactly!\na solution All of the above stem from the fact that the user is not present in /etc/passwd. So the recommended approach is to use the nsswrapper library in order to use a custom passwd file on runtime.\nDetails of the approach can be found in openshift’s guidelines for creating image.\nThe basic idea is that you install and load the libnsswrapper.so and then using environment variables you point to a custom `passwd` and `group`. These files are generated on runtime (where u know the uid and can now generate an entry for the passwd). So the steps are:\nuse the uid to generate a passwd use the NSS_WRAPPER_PASSWD to point to the generated passwd use the NSS_WRAPPER_GROUP to point to a the generated group Note: The `NSS_WRAPPER_GROUP` environment variable is required. If you don’t have a use for a custom group file, point it to the original one.\nusing init containers The problem with the approach described in the previous section is that the nswrapper library needs to be added to each image that is affected by it. And if you are lazy like me, you are probably not going to like it.\nSo, here’s a possibly controversial (as in hacky) approach I come up with, so that I can limit the amount of effort I need to put into it.\ncomposition vs inheritance Instead of creating a new version of docker image that contains the nsswrapper library for each of the affected images, I decided to try and create: `One image to wrap them all`.\nIn openshift, a pod may contain multiple containers and these containers can share file system (both regular and init containers). So, it is absolutely possible to have an init container copy a library to the shared file system, so that a regular container can pickup an use. 
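Concretely, the wiring in the target container boils down to a few environment variables (a sketch; nss_wrapper is conventionally loaded via LD_PRELOAD, and the paths reuse the SHARED_DIR convention from the script below):
# load nss_wrapper and point it at the files the init container generated
export LD_PRELOAD=${SHARED_DIR}/libnss_wrapper.so
export NSS_WRAPPER_PASSWD=${SHARED_DIR}/generated.passwd
export NSS_WRAPPER_GROUP=/etc/group
whoami   # the arbitrary uid now resolves to a proper user entry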
And since all of the nsswrapper container handling is done via environment variables (which can be defined in the pod), it can be completely transparent to the target container.\nSo, `the one image` (that will be used as init container) will contain the `libnsswrapper.so` and a helper script that will:\ncopy that file to a shared file system. generate the passwd (and optionally the group file) copy the generated passwd to the shared filesystem The script below, does the all of the above, with the use of a passwd template:\n1 2 3 4 5 6 7 #!/bin/bash export USER_ID=`id -u` export GROUP_ID=`id -g` cp /usr/lib64/libnss_wrapper.so ${SHARED_DIR}/libnss_wrapper.so envsubst \u0026lt; /usr/local/share/passwd.template \u0026gt; ${SHARED_DIR}/generated.passwd The template is used to render the passwd file using environment variables. In the end both the generated file and library are copied to the shared file system. The template could look like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/sbin/nologin daemon:x:2:2:daemon:/sbin:/sbin/nologin adm:x:3:4:adm:/var/adm:/sbin/nologin lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin sync:x:5:0:sync:/sbin:/bin/sync shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown halt:x:7:0:halt:/sbin:/sbin/halt mail:x:8:12:mail:/var/spool/mail:/sbin/nologin operator:x:11:0:operator:/root:/sbin/nologin games:x:12:100:games:/usr/games:/sbin/nologin ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin nobody:x:99:99:Nobody:/:/sbin/nologin ${USER_NAME}:x:${USER_ID}:${GROUP_ID}:${USER_DESCRIPTION}:${USER_HOME}:/bin/bash A working version of this concept can be found here: https://github.com/syndesisio/nsswrapper.\nSo, the only things that remains is to specify:\nthe shared filesystem and the environment variables the environment variables so that the target container can make use of the resources. defining a shared volume To define a shared file system for all containers of a pod is as simple as defining an `emptyDir` volume:\n1 2 3 volumes: - emptyDir: {} name: shared-volume mounting the …","date":1506632400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"2ea45fb9f2587a07b921c1458c8dbf77","permalink":"https://iocanel.com/2017/09/using-init-containers-to-handle-openshifts-arbitrary-user-ids/","publishdate":"2017-09-29T00:00:00+03:00","relpermalink":"/2017/09/using-init-containers-to-handle-openshifts-arbitrary-user-ids/","section":"post","summary":"intro openshift takes security seriously. Sometimes more seriously than I’d like (mostly cause I am lazy). One such example is the fact that containers run using arbitrary users. This is done as an extra measure to control damages, should a process somehow escapes its container boundaries.\nBut how does it affect users?\nthe problem Users need to follow certain guidelines when creating container images.\ndon’t assume a user you don’t have a known uid The uid of the user is not known in advnace. Also there is no way of controlling it.\n","tags":["openshift"],"title":"Using init containers to handle Openshift’s arbitrary user ids","type":"post"},{"authors":null,"categories":["hints"],"content":"Prologue Yesterday I was having a talk with Adrian Cole and during our talk he had an unpleasant surprise. 
He found out that he forgot a node running on his Amazon EC2 for a couple of days and that it would cost him a several bucks.\nThis morning I was thinking about his problem and I was thinking of ways that would help you avoid situations like this.\nMy idea was to build a simple project that would notify you of your running nodes in the cloud via email at a given interval.\nThis post is about building such as solution with Apache Camel, which help you integrate very easily with both your cloud provider and of course your email:%. The full story and the sources of this project can be found below.\nWorking with recurring tasks Apache Camel provides a quartz component, which will allow you schedule a task with a given interval. It is really simple to use. In our case a one hour interval sounds great. Also we want an unlimited time of executions (repeatCount=-1) so it could be something like this.\nUsing Camel to integrate to your Cloud provider Apache Camel 2.9.0 will provide a jclouds component, which will allow you to use jclouds, to integrate with most cloud key/value engines \u0026amp; compute services. I am going to use this component, to connect to my cloud provider (I will use my EC2 account, but it would work with most cloud providers)\nMy first task is to create a jclouds compute service and pass it to the camel-jclouds component. This will allow me to use jclouds inside my camel routes.\nTo avoid providing my real credentials I’ve used property place holders and keep the real credentials in a properties file.\nNow that the component is configured I am ready to define my route. The route will use Camel jclouds compute producer to send a request to my cloud provider and ask how many nodes are currently running. This query can be further enhanced with other parameters such as group (get me all the running nodes of group X) or even image (get me all the running nodes of group X that use image Y).\nAll I have to do is add the following element to my route.\nThe out message will contain a set in each body with all the metada of the running nodes.\nFiltering the results I don’t want to fire an email every time I ask my cloud provider about the running nodes, but only when there is actually a running node. The best way to do so is to use the Message Filter EIP pattern. I am going to use that in order to filter out all messages that have a body which contains an empty set.\nSending the email This is the easiest part, since the only thing I need to specify are the sender, the target \u0026amp; the subject of the email. I can do it simply but adding headers to the message. Finally I need to specify the smtp server and the credentials required for using it.\nNow all we need to do is set the destination endpoint inside the message filer.\nRunning the example The full source of this example can be found at github. The project is called cloud notifier. You will have to edit the property file camel.properties in order to add the credentials for your cloud provider and email account. In order to run it all you need to do is type mvn camel:run.\nIf you have a couple of nodes running, the result will look like this.\nFigure 1: Received notifications\nThe source of the project can be found here: sources.\nEnjoy!\nConclusions The camel-jclouds component is really new, it will be part of 2.9.0 releasem however it already provides some really cool features. It also provides the ability to create/destroy or run scripts on your nodes from camel routes. 
Also it leverages jclouds blobstore API in order to integrate with cloud provider key value engines (e.g. Amazon S3) Can you imagine executing commands in the cloud using your mobile phone and sms message? (Camel also supports protocols for exchanging sms).\nI hope you find all these really useful.\nEdit: While I was writing this simple app, to my surprise I found out a forgotten instance myself!\n","date":1320444e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"9353eec778dd161146f1f31aabf4d2af","permalink":"https://iocanel.com/2011/11/cloud-notifications-with-apache-camel-and-jclouds/","publishdate":"2011-11-05T00:00:00+02:00","relpermalink":"/2011/11/cloud-notifications-with-apache-camel-and-jclouds/","section":"post","summary":"Prologue Yesterday I was having a talk with Adrian Cole and during our talk he had an unpleasant surprise. He found out that he forgot a node running on his Amazon EC2 for a couple of days and that it would cost him a several bucks.\nThis morning I was thinking about his problem and I was thinking of ways that would help you avoid situations like this.\nMy idea was to build a simple project that would notify you of your running nodes in the cloud via email at a given interval.\n","tags":["java","camel","jcoulds"],"title":"Cloud notifications with Apache Camel and jclouds","type":"post"},{"authors":null,"categories":["personal","conference","presentation"],"content":"Prologue I am currently returning home from JavaOne 2011. I am at the airport of Munich waiting for my connecting flight to Athens. Once again the flight my flight is delayed and its a great chance to blog a bit about JavaOne.\nApache Karaf Cellar at JavaOne 2011 I had the chance to make a BOF about Karaf Cellar last Tuesday night. Even though the presentation was really late (20:30) and there were a lot of parties going on at this time (actually I was at the Jboss party right before my presentation) there were quite a few people that attended. The best part was that most of the people who attended were really eager to hear about Karaf \u0026amp; Cellar and I received a lot of great “straight to the point” questions. So I really enjoyed the talk and had a lot of fun.\nI was worried that I would be really nervous, since I am not that used at public speaking, but I think the drinks I had in the Jboss party did the trick.\nAfter the talk Right after the talk I had the chance to have a few more drinks with Marios Trivizas, Chris Soulios, Adrian Cole, Chas Emerick \u0026amp; Toni Batchelli.\nThe FuseSource Booth Apart from talking and attending other sessions at JavaOne, I also had the chance to spent a lot of time at the booth of FuseSource. Great chance to meet with people enjoying our services and also to talk with people interested in learning more about FuseSource Products \u0026amp; FuseSource success stories.\n“The is no place like home” “There is no place like home” Well, actually there is and is called San Francisco, but now I am back home \u0026amp; ready to dive into open source. I hope I’ll have the chance to be there next year.\n","date":1318194e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"ace520e37a68a8b7fdf42f4215448e55","permalink":"https://iocanel.com/2011/10/my-javaone-talk-about-cellar/","publishdate":"2011-10-10T00:00:00+03:00","relpermalink":"/2011/10/my-javaone-talk-about-cellar/","section":"post","summary":"Prologue I am currently returning home from JavaOne 2011. 
I am at the airport of Munich waiting for my connecting flight to Athens. Once again the flight my flight is delayed and its a great chance to blog a bit about JavaOne.\nApache Karaf Cellar at JavaOne 2011 I had the chance to make a BOF about Karaf Cellar last Tuesday night. Even though the presentation was really late (20:30) and there were a lot of parties going on at this time (actually I was at the Jboss party right before my presentation) there were quite a few people that attended. The best part was that most of the people who attended were really eager to hear about Karaf \u0026 Cellar and I received a lot of great “straight to the point” questions. So I really enjoyed the talk and had a lot of fun.\n","tags":["java","javaone","osgi","karaf"],"title":"My JavaOne talk about Cellar","type":"post"},{"authors":null,"categories":["development"],"content":"Prologue In some previous blog post, I designed and implemented Cellar (a small clustering engine for Apache Karaf powered by Hazelcast). Since then Cellar grew in features and eventually was accepted inside Karaf as a subproject.\nThis post will provide a brief description of Cellar as it is today.\nCellar Overview Cellar is designed so that it can provide Karaf the following high level features\nDiscovery Multicast Unicast Cluster Group Management Node Grouping Distributed Configuration Admin per Group distributed configuration data event driven distributed / local bridge Distributed Features Service per Group distributed features/repos info event driven distributed / local bridge Provisioning Tools Shell commands for cluster provisioning The core concept behind cellar is that each node can be a part of one ore more groups, that provide the node distributed memory for keeping data (e.g. configuration, features information, other) and a topic which is used to exchange events with the rest group members.\nEach group comes with a configuration, which defines which events are to be broadcasted and which are not. Whenever a local change occurs to a node, the node will read the setup information of all the groups that it belongs to and broadcast the event to the groups that whitelist the specific event. The broadcast operation is happening via the distributed topic provided by the group. For the groups that the broadcast is supported, the distributed configuration data will be updated so that nodes that join in the future can pickup the change.\nSupported Events There are 3 types of events:\nConfiguration change event Features repository added/removed event. Features installed/unistalled event. For each of the event types above a group may be configured to enabled synchronization, and to provide a whitelist / blacklist of specific event ids.\nExample: The default group is configured allow synchronization of configuration. This means that whenever a change occurs via the config admin to a specific PID, the change will pass to the distributed memory of the default group and will also be broadcasted to all other default group members using the topic. This is happening for all PIDs but org.apache.karaf.cellar.node which is marked as blacklisted and will never be written or read from the distributed memory, nor will broadcasted via the topic. Should the user decide, he can add/remove any PID he wishes to the whitelist/blacklist.\nSyncing vs Provisioning Syncing (changing stuff to one node and broadcast the event to all other nodes of the group) is one way of managing the cellar cluster, but its not the only way. 
Cellar also provides a lot of provisioning capabilities. It provides tools (mostly via command line), which allow the user to build a detailed profile (configuration and features) for each group.\nCellar in Action To see how all of the things described so far in action, you can have a look at the following 5 minute cellar demo: Note: The video was shoot before Cellar adoption by Karaf, so the feature url, configuration PIDs are out of date, but the core functionality is fine.\nI hope you enjoy it!\n","date":1304715600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"74c803b368cd6aef8327d12a5c487fe3","permalink":"https://iocanel.com/2011/05/apache-karaf-cellar/","publishdate":"2011-05-07T00:00:00+03:00","relpermalink":"/2011/05/apache-karaf-cellar/","section":"post","summary":"Prologue In some previous blog post, I designed and implemented Cellar (a small clustering engine for Apache Karaf powered by Hazelcast). Since then Cellar grew in features and eventually was accepted inside Karaf as a subproject.\nThis post will provide a brief description of Cellar as it is today.\nCellar Overview Cellar is designed so that it can provide Karaf the following high level features\nDiscovery Multicast Unicast Cluster Group Management Node Grouping Distributed Configuration Admin per Group distributed configuration data event driven distributed / local bridge Distributed Features Service per Group distributed features/repos info event driven distributed / local bridge Provisioning Tools Shell commands for cluster provisioning The core concept behind cellar is that each node can be a part of one ore more groups, that provide the node distributed memory for keeping data (e.g. configuration, features information, other) and a topic which is used to exchange events with the rest group members.\n","tags":["java","osgi","karaf"],"title":"Apache Karaf Cellar","type":"post"},{"authors":null,"categories":["development"],"content":"OSGi in the clouds The last couple of years OSGi and Cloud Computing are two buzz words, that you don’t see go hand in hand that often. JClouds is going to change that, since 1.0.0 release is OSGi ready and it also provide direct integration with Apache Karaf.\njclouds in the Karaf The last couple of weeks I have been working with the jclouds team in order to improve the OSGification of jclouds and also to provide integration with Apache Karaf. I will not go into much detail in this post, since there is a [[wiki. I will add however a small demo that shows how easy it is. A cloud, a Karaf and a Camel The fact that JClouds is now OSGi ready opens up new horizons. Apache Camel is one of them. I have been working on a Camel Component that leverages JClouds blobstore abstraction, in order to provide blobstore consumers and producers via Apache Camel.\nHopefully, abstractions for Queues and Tables will follow…\nYou can find it and give it a try on my github repository.\n","date":1304715600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"f7ca1209aca1a4bfde544b92c105434d","permalink":"https://iocanel.com/2011/05/jclouds-osgi/","publishdate":"2011-05-07T00:00:00+03:00","relpermalink":"/2011/05/jclouds-osgi/","section":"post","summary":"OSGi in the clouds The last couple of years OSGi and Cloud Computing are two buzz words, that you don’t see go hand in hand that often. 
JClouds is going to change that, since 1.0.0 release is OSGi ready and it also provide direct integration with Apache Karaf.\njclouds in the Karaf The last couple of weeks I have been working with the jclouds team in order to improve the OSGification of jclouds and also to provide integration with Apache Karaf. I will not go into much detail in this post, since there is a [[wiki. I will add however a small demo that shows how easy it is. ","tags":["java","osgi","karaf"],"title":"jclouds \u0026 OSGi","type":"post"},{"authors":null,"categories":["development","presentations"],"content":" Presented on OSGi and Apache Karaf on Java Hellenic User Group.\nIt was a great event with very interesting presentations. The full list of presentations can be found here.\nRegarding my presentation, I was a bit nervous at first, since I hadn’t practiced my “presentation” skills for a while, but things got better as time went by. I’ve had the chance to meet a lot of interesting people and discuss about OSGi, Apache Karaf \u0026amp; Apache ServiceMix. The slides of the presentation can be found at: Slide Share.\nApache Karaf Demonstration Due to time constraints and the extended introduction to OSGi (as the community requested) I didn’t have the chance to provide a proper Karaf demonstration. However, I made a demo video which I hope to fill the gap. The video can be found at Slide Share Karaf Demo or you can download it from my Google Site.\nI hope you enjoy both the presentation and the video.\n","date":1303160400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"6b3415255bc50b8ba4034ddb2e3f1520","permalink":"https://iocanel.com/2011/04/introduction-to-osgi-and-karaf-at-jhug/","publishdate":"2011-04-19T00:00:00+03:00","relpermalink":"/2011/04/introduction-to-osgi-and-karaf-at-jhug/","section":"post","summary":" Presented on OSGi and Apache Karaf on Java Hellenic User Group.\nIt was a great event with very interesting presentations. The full list of presentations can be found here.\nRegarding my presentation, I was a bit nervous at first, since I hadn’t practiced my “presentation” skills for a while, but things got better as time went by. I’ve had the chance to meet a lot of interesting people and discuss about OSGi, Apache Karaf \u0026 Apache ServiceMix. The slides of the presentation can be found at: Slide Share.\n","tags":["java","osgi","karaf"],"title":"Introduction to OSGi and Karaf at JHUG","type":"post"},{"authors":null,"categories":["development"],"content":"EDIT: The project “cellar” has been upgraded with a lot new features, which are not described by this post. A new post will be added soon.\nPrologue I have been playing a lot with Hazelcast lately, especially pairing it with Karaf. If you haven’t done already you can read my previous post on using Hazelcast on Karaf.\nIn this post I am going to take things one step further and use Hazelcast to build a simple clustering engine on Karaf.\nThe engine that I am going to build will have the following features:\nZero configuration clustering Node discover each other with no config Configuration Replication muslticasting configuration change events configurable blacklist/whitelist by PID lifecycel support (can be enabled/disabled using shell) Features Repository \u0026amp; State replication multicasting repository events (add url and remove url). multicasting features state events. configurable blacklist / whitelist by feature. lifecycle support (can be enabled/disabled using shell). 
Clustering management distributed command pattern implementation. monitoring and management commands. Architecture The idea behind the clustering engine is that for each unit that we want to replicate, we create an event, broadcast the event to the cluster and hold the unit’s state in a shared resource, so that the rest of the nodes can look up and retrieve the changes.\nExample: We want all nodes in our cluster to share configuration for PIDs a.b.c and x.y.z. On node “Karaf A” a change occurs on a.b.c. “Karaf A” updates the shared repository data for a.b.c and then notifies the rest of the nodes that a.b.c has changed. Each node looks up the shared repository and retrieves the changes.\nThe role of Hazelcast The architecture as described so far could be implemented using a database/shared filesystem as a shared resource and polling instead of multicasting events. So why use Hazelcast? Hazelcast fits in perfectly because it offers:\nAuto discovery Cluster nodes can discover each other automatically. No configuration is required. No single point of failure No server or master is required for clustering. The shared resource is distributed, hence we introduce no single point of failure. Provides distributed topics Using in memory distributed topics allows us to broadcast events / commands that are valuable for management and monitoring. The implementation For implementing all the above we have the following entities:\nOSGi Listener An interface that implements a listener for specific OSGi events (e.g. ConfigurationListener).\nEvent The object that contains all the information required to describe the event (e.g. PID changed).\nEvent Topic The distributed topic used to broadcast events. It is common for all event types.\nShared Map The distributed collection that serves as the shared resource. We use one per event type.\nEvent Handler The processor that processes remote events received through the topic.\nEvent Dispatcher The unit that decides which event should be processed by which event handlers.\nCommand A special type of event that is linked to a list of events that represent the outcome of the command.\nResult A special type of event that represents the outcome of a command. Commands and results are correlated.\nThe OSGi spec in a lot of situations describes Events and Listeners (e.g. ConfigurationChangeEvent and ConfigurationListener). By implementing such a Listener and exposing it as an OSGi service in the Service Registry, I make sure that we “listen” to the events of interest.\nWhen the listener is notified of an event, it forwards the Event object to a Hazelcast distributed topic. To keep things as simple as possible I keep a single topic for all event types. Each node has a listener registered on that topic and sends all events to the Event Dispatcher.\nWhen the Event Dispatcher receives an event, it looks up an internal registry (in our case the OSGi Service Registry) in order to find an Event Handler that can handle the received Event. If a handler is found, it receives the event and processes it.\nBroadcasting commands Commands are a special kind of event. They imply that when they are handled, a Result event will be fired that contains the outcome of the command. So for each command we have one result per recipient. Each command contains a unique id (unique for all cluster nodes, created by Hazelcast). This id is used to correlate the request with the result. Each successfully correlated result is added to the list of results on the command object.
Using the source I created a small project that demonstrates all of the functionality described above and uploaded it to github, so that I can share it with you, receive feedback and discuss it. The project is called cellar. I couldn’t find a more …","date":1299794400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"7c53773ff32decc94006bd3839761d00","permalink":"https://iocanel.com/2011/03/karaf-clustering-using-hazelcast/","publishdate":"2011-03-11T00:00:00+02:00","relpermalink":"/2011/03/karaf-clustering-using-hazelcast/","section":"post","summary":"EDIT: The project “cellar” has been upgraded with a lot of new features, which are not described in this post. A new post will be added soon.\nPrologue I have been playing a lot with Hazelcast lately, especially pairing it with Karaf. If you haven’t done so already, you can read my previous post on using Hazelcast on Karaf.\nIn this post I am going to take things one step further and use Hazelcast to build a simple clustering engine on Karaf.\n","tags":["java","osgi","karaf","hazecast"],"title":"Karaf clustering using Hazelcast","type":"post"},{"authors":null,"categories":["development"],"content":"Prologue Over the last few months Hazelcast has caught my attention. I first saw the JIRA of the camel-hazelcast component, then I read about it, I ran some examples and eventually I fell in love with it.\nIf you are not already familiar with it, Hazelcast is an open source clustering platform which provides a lot of features, such as:\nAuto discovery Distributed Collection Transactions Data Partitioning You can visit the Hazelcast Documentation for more information. In this blog post I will show how to run hazelcast on Apache Karaf or Apache ServiceMix and I will provide an example application that creates a hazelcast instance, deploys the hazelcast monitoring web application and adds a couple of shell commands to Apache Karaf.\nFinally, I will create a Hazelcast Topic using blueprint and we will create a clustered echo command using that topic.\nFor all of the above I will provide the full source so that you can try it yourself.\nHazelcast & OSGi According to the hazelcast website, hazelcast is not yet OSGi ready (it is still on the TODO list). However, I found that versions 1.9.x are ready enough to get you going. In this post I will use the current trunk of the hazelcast source (1.9.3-SNAPSHOT), for which I have created a couple of patches for the web-console and for some other minor issues.\nHazelcast Instance as an OSGi service Even though Hazelcast requires zero configuration, I found it best to create a Hazelcast instance using Spring, pass the desired configuration and finally expose the instance as an OSGi service. The instance is created with a minimal configuration which only sets the Hazelcast credentials. Hazelcast has no dependencies, so the only things required are the hazelcast bundle and the hazelcast monitoring war (if you wish to have access to the web console). From the Karaf shell you can just type:
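Assuming the patched 1.9.3-SNAPSHOT artifacts are available in your local maven repository, the installation could look something like this (the coordinates are indicative — adjust them to whatever you actually built):

osgi:install -s mvn:com.hazelcast/hazelcast/1.9.3-SNAPSHOT
osgi:install -s war:mvn:com.hazelcast/hazelcast-monitor/1.9.3-SNAPSHOT/war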
Once hazelcast and its monitoring are started, you can browse the hazelcast monitoring console at http://localhost:8181/hazelcast-monitor, which looks like the page below.\nFigure 1: Hazelcast console\nBuilding a distributed collection using the Blueprint To create a distributed collection with hazelcast, all you need is an instance and a unique String identifier that will be used to uniquely identify the collection. Since we have already created an instance and exposed it as an OSGi service, the rest is pretty easy.\nWe will use a distributed topic to build a distributed echo command (a command that will print messages in the console of all nodes). Now we need two simple things:\nA listener on that topic that will listen for messages and display them A shell command that will put messages on the topic. A listener could be as simple as this:
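A minimal sketch against the 1.9-era Hazelcast API (where the listener receives the published object directly); the class name and the init() wiring are illustrative:

import com.hazelcast.core.ITopic;
import com.hazelcast.core.MessageListener;

public class EchoMessageListener implements MessageListener<Object> {

    private ITopic<Object> topic;

    // Called once the topic has been injected (e.g. as the blueprint init-method).
    public void init() {
        topic.addMessageListener(this);
    }

    // Print every message published on the topic to the local console.
    public void onMessage(Object message) {
        System.out.println("echo: " + message);
    }

    public ITopic<Object> getTopic() {
        return topic;
    }

    public void setTopic(ITopic<Object> topic) {
        this.topic = topic;
    }
}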
This is a simple pojo that contains a topic and acts as a MessageListener on that topic. For each message added to the topic, the listener displays it on the standard output. We can add this pojo to our blueprint xml.\nWhat’s left to be done is to create the command that will actually put the message on the topic.\nPutting it all together We can now start two karaf nodes, either on the same machine or on separate machines in the same network, deploy hazelcast and its monitoring and finally deploy the instance, the topic and the commands as we did so far.\nLet’s try the command:\nFigure 2: Messaging between different karaf installations\nUsing the full source The code can be found on github at: hazelcast-on-karaf-sources. It consists of 3 maven modules:\ninstance (contains a spring dm descriptor which creates the instance). shell (a shell module which contains a couple of hazelcast commands, including the echo). feature (a feature descriptor for easier installation of the above modules and their deps). Once you build the project, from the karaf shell you can run:\nFigure 3: Installing and using the command\nEnjoy!\nNote: I am planning to blog more on the subject if I have the time, so stay around.\n","date":1298844e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"d34b1ec033f8ee4d6722cd108e49a9fe","permalink":"https://iocanel.com/2011/02/hazelcast-on-karaf/","publishdate":"2011-02-28T00:00:00+02:00","relpermalink":"/2011/02/hazelcast-on-karaf/","section":"post","summary":"Prologue Over the last few months Hazelcast has caught my attention. I first saw the JIRA of the camel-hazelcast component, then I read about it, I ran some examples and eventually I fell in love with it.\nIf you are not already familiar with it, Hazelcast is an open source clustering platform which provides a lot of features, such as:\nAuto discovery Distributed Collection Transactions Data Partitioning You can visit the Hazelcast Documentation for more information. In this blog post I will show how to run hazelcast on Apache Karaf or Apache ServiceMix and I will provide an example application that creates a hazelcast instance, deploys the hazelcast monitoring web application and adds a couple of shell commands to Apache Karaf.\n","tags":["java","osgi","karaf","hazelcast"],"title":"Hazelcast on Karaf","type":"post"},{"authors":null,"categories":null,"content":"I am currently in the middle of my Xmas vacation and I was just about to download a movie for tonight. While downloading, I checked my emails, which I hadn’t really checked since Christmas Eve.\nAn invitation to join the Apache ServiceMix project as a committer was waiting for me at the top of my Inbox.\nOf course I accepted the invitation and I immediately started blogging about it… That’s a great ending for 2010, but it’s also a serious indication that I am going to need a time transplant for 2011!\n","date":1293400800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"9bfc2baf4afff4e34344ce817f2eeec8","permalink":"https://iocanel.com/2010/12/servicemix-committer/","publishdate":"2010-12-27T00:00:00+02:00","relpermalink":"/2010/12/servicemix-committer/","section":"post","summary":"I am currently in the middle of my Xmas vacation and I was just about to download a movie for tonight. While downloading, I checked my emails, which I hadn’t really checked since Christmas Eve.\nAn invitation to join the Apache ServiceMix project as a committer was waiting for me at the top of my Inbox.\nOf course I accepted the invitation and I immediately started blogging about it… That’s a great ending for 2010, but it’s also a serious indication that I am going to need a time transplant for 2011!\n","tags":["servicemix"],"title":"ServiceMix committer","type":"post"},{"authors":null,"categories":["personal"],"content":"I just returned home from JavaOne and Oracle Develop 2010 (which was also my first ONE) and I thought that it would be a good idea to take 5 minutes and share the experience.\nIntro The city of San Francisco was awesome and I couldn’t find any other place in the world better suited for the job. The weather, the size and the facilities were exactly what such an event required. The organization was good enough and there were tons of sessions that I found exciting.\nDon’t let it “cloud” your judgement This is an alteration of a famous quote taken from “The Godfather”, but it is most fitting for this year’s JavaOne event. I found the excessive use of the buzzword “cloud” not only annoying but also misleading. There were tons of sessions that used this buzzword to draw attention, even though they were not that related. The only thing I didn’t see was:\nTaking Sushi to the Sky: Secrets for successful cooking in the premises and in the cloud. Note: The name above resembles actual session names. I am not implying anything about a particular session.\nAnd the winner is … Hadoop For me by far the most interesting thing I saw at JavaOne was Apache Hadoop. To put it in a sentence: `The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing`.\nI had the luck to join two great sessions about hadoop.\nExtracting Real Value from Your Data with Apache Hadoop. (HOL) Hadoop vs. Relational Database: Shout-out Between a Java Guy and a Database Guy. (BOF) The second one will definitely be published so don’t miss it.\nI also liked … XSTM Another pretty interesting session I had the chance to watch was:\nSimpler and Faster Cloud Applications Using Distributed Transactional Memory. This was a session related to the open source project XSTM, which I found so interesting that, if I could also find the time, I would definitely love to work with it.\nFinal thoughts I would definitely love to join JavaOne next year too. Here are two things that I will do next year and strongly recommend doing at such events:\nDon’t go with the buzz. See the detailed description beyond the buzzwords. Don’t spend time on things you already know.
A one-hour session can be a good introduction to an unfamiliar area, but I can’t see how it can “add” to an area you are already familiar with. ","date":1285362e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"8f4a76c067e2ed6bcba77c2cdb732c9d","permalink":"https://iocanel.com/2010/09/javaone-and-oracle-develop-2010/","publishdate":"2010-09-25T00:00:00+03:00","relpermalink":"/2010/09/javaone-and-oracle-develop-2010/","section":"post","summary":"I just returned home from JavaOne and Oracle Develop 2010 (which was also my first ONE) and I thought that it would be a good idea to take 5 minutes and share the experience.\nIntro The city of San Francisco was awesome and I couldn’t find any other place in the world better suited for the job. The weather, the size and the facilities were exactly what such an event required. The organization was good enough and there were tons of sessions that I found exciting.\n","tags":["java","javaone"],"title":"JavaOne and Oracle Develop 2010","type":"post"},{"authors":null,"categories":["development"],"content":"Prologue Karaf 2.1.0 has just been released! Among other new features, it includes a major revamp of the JAAS module support:\nEncryption support Database Login Module Role Policies This post will use all 3 features in order to create a secured Wicket application on Karaf, using Karaf’s JAAS modules and Wicket’s auth-roles module.\nIntroduction The application that we are going to build is a simple wicket application. It will be deployed on Karaf and the user credentials will be stored in a mysql database. For encrypting the passwords we will use Karaf’s Jasypt encryption service implementation, to encrypt passwords using the MD5 algorithm in hexadecimal format.\nStep 1: Creating the database The database that we are going to create will be the simplest possible. We need a table that will hold the username and password of each user. Each user may have one or more roles, so we will need a second table to hold the roles of the users.\nWe are going to create a user named “iocanel”, that will have the roles “manager” and “admin” and the password “koala” (stored in MD5 with hex output).
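A minimal version of such a schema could look like this. The table and column names are only illustrative (as noted right below, the login module lets you customize the queries to match your schema), and the password value is a placeholder for the actual MD5 hex digest of “koala”:

CREATE TABLE users (
  username VARCHAR(40) NOT NULL PRIMARY KEY,
  password VARCHAR(64) NOT NULL
);

CREATE TABLE roles (
  username VARCHAR(40) NOT NULL,
  role VARCHAR(40) NOT NULL
);

-- replace the placeholder with the MD5 hex digest of "koala"
INSERT INTO users VALUES ('iocanel', '<md5-hex-of-koala>');
INSERT INTO roles VALUES ('iocanel', 'manager');
INSERT INTO roles VALUES ('iocanel', 'admin');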
Note, for cases where a schema for user credentials already exists, Karaf’s database login module offers customization by allowing the user to provide custom queries for password and role retrieval.\nStep 2: Creating a data source In order to create a data source we will use blueprint to create a DataSource as an OSGi service. Before we do that we need to install the mysql bundle and its prerequisite. They can be easily installed from the karaf shell:

osgi:install wrap:mvn:javax.xml.stream/stax-api/1.0
osgi:install wrap:mvn:mysql/mysql-connector-java/5.1.13

Once all prerequisites are met, the datasource can be created by dropping the following xml into the karaf deploy folder or by adding it under the OSGI-INF/blueprint folder of our bundle.\nStep 3: Creating a JAAS realm In the same manner, the new JAAS realm can be created by dropping the blueprint xml into the deploy folder or by adding it under the OSGI-INF/blueprint folder of our bundle.\nThe new realm will make use of Karaf’s JDBCLoginModule and will also use MD5 encryption with hexadecimal output. Finally, it will be passed a role policy that will add the “ROLE_” prefix to all role principals. This way our application can identify the role principals without depending on the Karaf implementation.\nIf this isn’t clear, note that JAAS specifies the Principal interface, while its implementations provide User & Role principals (as implementing classes), making it impossible to distinguish between the two without having a dependency on the JAAS implementation or a common convention. This is what Role Policies are about.\nStep 4: Creating a wicket application Everything is set and all we need is to create the wicket application that will make use of our new JAAS realm in order to authenticate.\nThe first step is to create a Wicket Authenticated Session:
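A sketch of what such a session could look like with wicket-auth-roles 1.4; the realm name “wicket” and the class name are assumptions, and the inline CallbackHandler simply feeds the supplied credentials to the login module:

import java.io.IOException;
import java.security.Principal;
import javax.security.auth.callback.*;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import org.apache.wicket.Request;
import org.apache.wicket.authentication.AuthenticatedWebSession;
import org.apache.wicket.authorization.strategies.role.Roles;

public class JaasAuthenticatedSession extends AuthenticatedWebSession {

    private final Roles roles = new Roles();

    public JaasAuthenticatedSession(Request request) {
        super(request);
    }

    @Override
    public boolean authenticate(final String username, final String password) {
        CallbackHandler handler = new CallbackHandler() {
            public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
                for (Callback callback : callbacks) {
                    if (callback instanceof NameCallback) {
                        ((NameCallback) callback).setName(username);
                    } else if (callback instanceof PasswordCallback) {
                        ((PasswordCallback) callback).setPassword(password.toCharArray());
                    }
                }
            }
        };
        try {
            // "wicket" is assumed to be the name of the JAAS realm created in step 3.
            LoginContext context = new LoginContext("wicket", handler);
            context.login();
            // Keep the ROLE_ prefixed principals added by the role policy.
            for (Principal principal : context.getSubject().getPrincipals()) {
                if (principal.getName().startsWith("ROLE_")) {
                    roles.add(principal.getName());
                }
            }
            return true;
        } catch (LoginException e) {
            return false;
        }
    }

    @Override
    public Roles getRoles() {
        return isSignedIn() ? roles : new Roles();
    }
}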
Now we need to tell our application to create such sessions and also where our sign-in page is located. For this purpose we will extend Wicket’s AuthenticatedWebApplication class. Now that everything is set up, we can restrict access to the HomePage to “admins” and “managers” by making use of Wicket’s auth-roles module.\nFinal Words I hope you found it useful. The source of this example will be added to this post soon, so stay tuned.\n","date":1285362e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"7e3ff6cea8b342cd00f020cea754559d","permalink":"https://iocanel.com/2010/09/karaf-jaas-modules-in-action/","publishdate":"2010-09-25T00:00:00+03:00","relpermalink":"/2010/09/karaf-jaas-modules-in-action/","section":"post","summary":"Prologue Karaf 2.1.0 has just been released! Among other new features, it includes a major revamp of the JAAS module support:\nEncryption support Database Login Module Role Policies This post will use all 3 features in order to create a secured Wicket application on Karaf, using Karaf’s JAAS modules and Wicket’s auth-roles module.\nIntroduction The application that we are going to build is a simple wicket application. It will be deployed on Karaf and the user credentials will be stored in a mysql database. For encrypting the passwords we will use Karaf’s Jasypt encryption service implementation, to encrypt passwords using the MD5 algorithm in hexadecimal format.\n","tags":["java","osgi","jaas","security"],"title":"Karaf JAAS modules in action","type":"post"},{"authors":null,"categories":["personal"],"content":"1 week after my vacation and still suffering from “post vacation depression”, this Monday seemed like a nightmare.\nI went to the office feeling the urge to go get myself a huge Carafe of coffee (cups have long been proven inefficient), when an incoming email drew my attention.\nIt was an invitation to join the Apache Karaf team as a committer.\nThis is the first open source project I have joined and I’m very thrilled (if not overreacting) about it, and that’s why I decided to blog about it.\nI am looking forward to working even more closely with this team.\nWell, it seems that Mondays aren’t that crappy after all!\n","date":1283115600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"9e44560fd10b53d6a539ca4721d1cd1b","permalink":"https://iocanel.com/2010/08/apache-karaf-committer/","publishdate":"2010-08-30T00:00:00+03:00","relpermalink":"/2010/08/apache-karaf-committer/","section":"post","summary":"1 week after my vacation and still suffering from “post vacation depression”, this Monday seemed like a nightmare.\nI went to the office feeling the urge to go get myself a huge Carafe of coffee (cups have long been proven inefficient), when an incoming email drew my attention.\nIt was an invitation to join the Apache Karaf team as a committer.\nThis is the first open source project I have joined and I’m very thrilled (if not overreacting) about it, and that’s why I decided to blog about it.\n","tags":["java","osgi","karaf"],"title":"Apache Karaf committer","type":"post"},{"authors":null,"categories":["development"],"content":"EDIT: Hibernate is now OSGi ready, so most of this content is now completely outdated.\nThe full source for this post has moved to github under my blog project on branch: wicket-spring-3-jpa2-hibernate-osgi-application-on-apache-karaf.\nPrologue Recently I attempted to modify an existing crud web application for OSGi deployment. During the process I encountered a lot of issues, such as:\nLack of OSGi bundles. Troubles wiring the tiers of the application together. Issues with the OSGi container configuration. Lack of detailed examples on the web. So, I decided to create such a guide & provide the full source for a working example (a very simple person crud application).\nThe first part of this guide is Creating custom Hibernate 3.5 OSGi bundles. This part provides an example project (which includes the bundles’ source) that describes how to use the custom hibernate bundles in order to build a wicket, spring 3, hibernate 3.5 / jpa 2 application and deploy it to Karaf.\nAmong others it describes:\nHow to wire the database and web tiers using the OSGi blueprint. How to deploy web applications to Karaf 1.6.0. A small wicket crud application. Note: This demo application does not make use of the OSGi Enterprise Spec, since it is an OSGi-fication of an existing application. The use of the spec will be a subject for future posts.\nEnjoy!\nEnvironment Preparation The OSGi run-time that will be used in this post is Felix/Karaf version 1.6.0.
This section describes the required configuration for deploying web applications.\nOnce karaf is downloaded and extracted, it can be started by typing bin/karaf from inside the karaf root folder.\nNow we are going to install the karaf webconsole and the war deployer, which will allow us to deploy web applications to karaf:

features:install webconsole
features:install war

Note: In the background karaf fetches all the required bundles from maven repositories. You are going to need internet access for this. Moreover, if you are behind a proxy you will need to set up your jvm net.properties accordingly. Having the proxy configured in maven settings.xml is not enough.\nCustom Bundles Most of the bundles required for this project are available either in public maven repositories or inside the Spring Enterprise Bundle Repository. However, hibernate 3.5.x, which is one of the key dependencies of this project, is not available as an OSGi bundle (note: earlier versions of hibernate can be found in the Spring EBR). More details on OSGi-fying Hibernate 3.5.x can be found in the previous part of the guide, “Creating custom Hibernate 3.5 OSGi bundles”.\nCreating the application itself The actual demo application will be the simplest possible wicket crud for persons (a killer application that stores/deletes/updates a person’s first name and last name in the database).\nDatabase The create schema script of such an application in mysql would look like this:

CREATE TABLE person (
  ID MEDIUMINT NOT NULL AUTO_INCREMENT,
  FIRST_NAME VARCHAR(40) NOT NULL,
  LAST_NAME VARCHAR(40) NOT NULL,
  PRIMARY KEY (ID)
);

Database Tier For the database tier we are going to create a simple bundle that will contain the entity, the dao interface and the dao implementation. The bundle will contain the necessary persistence descriptor for JPA 2.0 with hibernate as the persistence provider. Finally, it will use spring to create the data source, entity manager factory & JPA transaction manager.
This bundle will export the dao as a service to the OSGi Registry using Spring dynamic modules.\nThe Person entity for the example can look like:

package net.iocanel.database.entities;

import java.io.Serializable;
import javax.persistence.Basic;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.NamedQueries;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

/**
 * @author iocanel
 */
@Entity
@Table(name = "Person")
@NamedQueries({
  @NamedQuery(name = "Person.findAll", query = "SELECT p FROM Person p"),
  @NamedQuery(name = "Person.findById", query = "SELECT p FROM Person p WHERE p.id = :id"),
  @NamedQuery(name = "Person.findByFirstName", query = "SELECT p FROM Person p WHERE p.firstName = :firstName"),
  @NamedQuery(name = "Person.findByLastName", query = "SELECT p FROM Person p WHERE p.lastName = :lastName")})
public class Person implements Serializable {

  private static final long serialVersionUID = 1L;

  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  @Basic(optional = false)
  @Column(name = "ID")
  private Integer id;

  @Column(name = "FIRST_NAME")
  private String firstName;

  @Column(name = "LAST_NAME")
  private String lastName;

  public Person() {
  }

  public Person(Integer id) {
    this.id = id; …","date":1278882e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"0a3bf10fb529ba66136c352a3b54ea08","permalink":"https://iocanel.com/2010/07/wicket-with-spring-3-and-hibernate-on-apache-karaf/","publishdate":"2010-07-12T00:00:00+03:00","relpermalink":"/2010/07/wicket-with-spring-3-and-hibernate-on-apache-karaf/","section":"post","summary":"EDIT: Hibernate is now OSGi ready, so most of this content is now completely outdated.\nThe full source for this post has moved to github under my blog project on branch: wicket-spring-3-jpa2-hibernate-osgi-application-on-apache-karaf.\nPrologue Recently I attempted to modify an existing crud web application for OSGi deployment. During the process I encountered a lot of issues, such as:\nLack of OSGi bundles. Troubles wiring the tiers of the application together. Issues with the OSGi container configuration. Lack of detailed examples on the web. So, I decided to create such a guide & provide the full source for a working example (a very simple person crud application).\n","tags":["java","osgi","karaf","wicket","spring"],"title":"Wicket with Spring 3 and Hibernate on Apache Karaf","type":"post"},{"authors":null,"categories":["development"],"content":"EDIT: I am more than happy that this post is now completely obsolete. Hibernate is now OSGi ready, Yay!\nPrologue I was trying to migrate an application that uses JPA 2.0 / Hibernate to OSGi. I found out that hibernate does not provide OSGi bundles. There are some Hibernate bundles provided in the Spring Enterprise Bundle Repository, however there are none available for Hibernate 3.5.x, which implements JPA 2.0.
So I decided to create them myself and share the experience with you.\nThis post describes how to OSGi-fy Hibernate 3.5.2-Final with EhCache and JTA transaction support. The bundles that were created were tested on Felix Karaf, but they will probably work in other containers too.\nIntroduction A typical JPA 2.0 application with Hibernate as the persistence provider will probably require, among others, the following dependencies:\nhibernate-core hibernate-annotations hibernate-entitymanager hibernate-validator ehcache Unfortunately, at the time this post was written none of the above was available as an OSGi bundle. To make OSGi bundles for the above, one needs to overcome the following problems:\nCyclic dependencies inside Hibernate artifacts. 3rd party dependencies (e.g. Weblogic/Websphere Transaction Manager). Common api / impl issues for the validation api and the hibernate cache. The last bullet, which may not be that clear, points to a problem where an api loads classes from the implementation using Class.forName() or similar approaches. In the OSGi world that means that the api must import packages from the implementation.\nHibernate cyclic dependencies The creation of an OSGi bundle for each hibernate artifact is possible. However, when the bundles get deployed to an OSGi container, they will fail to resolve due to cyclic package imports.\nThe easiest way to overcome this issue is to merge the hibernate core artifacts into one bundle. Below I am going to provide an example of how to use the maven bundle plug-in to merge hibernate-core, hibernate-annotations & hibernate-entitymanager into one bundle.\nA common way to use the maven-bundle-plugin to merge jars into artifacts is to instruct it to embed the dependencies of a project into a bundle. However, this is not very handy in cases where you need to add custom code into the final bundle. In that case you can use the maven dependency plug-in to unpack the dependencies, the bundle plug-in to create the manifest and the jar plug-in to instruct it to use the generated manifest in the package phase.\nHibernate and 3rd party dependencies Hibernate has a lot of 3rd party dependencies. Some of them are available as OSGi bundles, some need to be created and some can be excluded.\nExamples of 3rd party dependencies that are available as OSGi bundles in the Spring Enterprise Repository are:\nantlr dom4j cglib Dependencies that are not available are:\njacc (javax.security.jacc) Dependencies that can be excluded vary depending on the needs. In my case I could exclude the Weblogic/Websphere transaction manager, since I didn’t intend to use them. To exclude a dependency, just add the packages to be excluded in the import packages section using the ! operator (e.g. !com.ibm.*).\nHibernate validator and validation API As mentioned above, the validation api provides a factory that builds the validator by loading the implementing class using Class.forName(). This issue can be solved in 2 ways:\nUse dynamic imports in the API bundle to import the implementation at runtime. Make the implementation an OSGi Fragment that will get attached to the API. In this example the validation api is the one provided by the Spring Enterprise Bundle Repository, so the second approach was easier to apply.\nMore details on this issue can be found in this excellent blog post: Having “fun” with JSR-303 Beans Validation and OSGi + Spring DM\nHibernate and EhCache More or less the same applies to EhCache. Hibernate provides an interface which is implemented by EhCache.
Hibernate loads that implementation at runtime. We will do exactly the same thing we did for the hibernate validator: we will convert the ehcache jar to a fragment bundle so that it gets attached to the merged hibernate bundle.\nHibernate and JTA Transactions I kept the most interesting part for last. This part describes what needs to be added inside the bundle so that it can support JTA transactions.\nFor JTA transactions Hibernate needs a reference to the transaction manager. That reference is returned by the TransactionManagerLookup class specified in the persistence.xml. In a typical JEE container the lookup class just performs a JNDI lookup to get the TransactionManager. In an OSGi container the transaction manager is very likely to be exported as an OSGi service.\nThis section describes how to build an OSGi based TransactionManagerLookup class. The solution presented is very simple and uses only the OSGi core framework (no blueprint implementation required).\nWe will add to the hibernate bundle 3 new classes:\nTransactionManagerLocator (Service Locator). OsgiTransactionManagerLookup (Lookup …","date":1278709200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"670589ec91f6297aa969d4878bf88096","permalink":"https://iocanel.com/2010/07/creating-custom-hibernate-osgi-bundles-for-jpa-2.0/","publishdate":"2010-07-10T00:00:00+03:00","relpermalink":"/2010/07/creating-custom-hibernate-osgi-bundles-for-jpa-2.0/","section":"post","summary":"EDIT: I am more than happy that this post is now completely obsolete. Hibernate is now OSGi ready, Yay!\nPrologue I was trying to migrate an application that uses JPA 2.0 / Hibernate to OSGi. I found out that hibernate does not provide OSGi bundles. There are some Hibernate bundles provided in the Spring Enterprise Bundle Repository, however there are none available for Hibernate 3.5.x, which implements JPA 2.0. So I decided to create them myself and share the experience with you.\n","tags":["java","osgi","hibernate"],"title":"Creating custom Hibernate OSGI bundles for JPA 2.0","type":"post"},{"authors":null,"categories":["hints"],"content":"Prologue This post intends to point out some pitfalls when using spring aop and reflection on the same objects. Moreover, it provides some examples of these pitfalls when combining ServiceMix & Camel with Spring JPA/Hibernate.\nThe two most common uses of aspect oriented programming with spring are:\nSecurity Transaction Handling I found myself having issues when applying those 2 on beans that are accessed using reflection (not in all cases) and below I am going to dig into those issues.\nSpring AOP flavors Spring aop can be used in many different flavors:\nCompile time weaving Load time weaving Using dynamic or cglib proxies (the main focus of this post) Cglib Proxies and reflection There are many cases where a bean needs to be accessed using reflection. A common case is to use reflection in order to access a private field. I could use the following piece of code in order to retrieve the privateProperty value of SomeBean using reflection:
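A minimal sketch, assuming a SomeBean instance with a private field named privateProperty (the wrapper class is illustrative):

import java.lang.reflect.Field;

public class PrivatePropertyReader {

    public static Object read(Object someBeanInstance) throws Exception {
        // Look up the field declared on the runtime class of the target bean.
        Field field = someBeanInstance.getClass().getDeclaredField("privateProperty");
        field.setAccessible(true);
        return field.get(someBeanInstance);
    }
}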
And this would work pretty well. However, if the someBeanInstance is enhanced using cglib, the code above would break, resulting in a null value in privatePropertyValue.\nSpring’s Transactional annotation and reflection The problem as described above might be pretty obvious, but here is a direct side effect of it that is not that obvious. Let’s assume the use of Spring’s transactional annotation. A possible setup could be a bean annotated as transactional. If the resource is injected using traditional reflection (as described above), this would eventually result in a NullPointerException, due to the fact that the resource would fail to be injected. Moreover, the exception would trigger a transaction rollback and the entity would not be saved.\nYou might wonder “why would reflection fail?”. The answer is that the cglib proxy is actually a subclass of the proxied object that is created at run-time, and thus reflection fails to find the declared field on the proxy. In order to make it work, getDeclaredField needs to be called on the super class (but that would break once you removed the aop).\nWorkaround: Spring’s ReflectionUtils to the rescue Spring provides a class that, among others, offers a workaround for this issue. Here is an example of using Spring’s ReflectionUtils on cglib proxies:
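A sketch of the same lookup, this time going through org.springframework.util.ReflectionUtils:

import java.lang.reflect.Field;
import org.springframework.util.ReflectionUtils;

public class ProxySafePropertyReader {

    public static Object read(Object someBean) {
        // findField walks up the class hierarchy, so it finds the field on SomeBean
        // even when someBean is actually a runtime-generated cglib subclass.
        Field field = ReflectionUtils.findField(someBean.getClass(), "privateProperty");
        ReflectionUtils.makeAccessible(field);
        return ReflectionUtils.getField(field, someBean);
    }
}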
Behind the scenes spring will attempt to find the declared field both on the target object (someBean) and all its superclasses. So if someBean is proxied, it will fail to find the declared field on the proxy, but it will succeed using its superclass (SomeBean.class).\nReal examples using Apache ServiceMix and Camel I first encountered the issue when I attempted to add the transactional annotation on the bean of a ServiceMix BeanEndpoint. A simplified version of this case is here. ServiceMix uses reflection (as described above) in order to inject the DeliveryChannel into the MessageExchangeListener, and this reproduces the problem. Unfortunately, a direct solution to this issue would require editing the BeanEndpoint itself (which is not such a bad idea). Another workaround would be to use the transactional annotation with compile time weaving. Finally, if none of the above seems appealing, you can always create another bean that will be annotated as transactional and make calls to that bean from inside the MessageExchangeListener.\nNote: The bean endpoint itself uses Spring’s ReflectionUtils and it shouldn’t encounter this issue, however it still does, due to the fact that the property (in this case the DeliveryChannel) is set on the proxy and not the actual object.\nA similar case I encountered was the use of Camel’s @RecipientList annotation combined with Spring’s @Transactional annotation. I will not get into details about it, since I think that by now it is pretty obvious.\nFinal thoughts If you get to understand the nature of this issue, it is not that hard to deal with. However, I spent a great deal of time trying to identify the root cause.\nIn most cases you can bypass it by avoiding proxying the reflection target itself. To do so you only pass a reference of the proxied object to the class that is accessed using reflection.\nFrom what I read in the forums, it taunts a lot of people and this is why I decided to blog about it.\nI hope you find it useful!\n","date":1275685200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"eb26b82f05a2700356dd2a2b7405ab07","permalink":"https://iocanel.com/2010/06/spring-aop-and-refleciton-pitfalls/","publishdate":"2010-06-05T00:00:00+03:00","relpermalink":"/2010/06/spring-aop-and-refleciton-pitfalls/","section":"post","summary":"Prologue This post intends to point out some pitfalls when using spring aop and reflection on the same objects. Moreover, it provides some examples of these pitfalls when combining ServiceMix & Camel with Spring JPA/Hibernate.\nThe two most common uses of aspect oriented programming with spring are:\nSecurity Transaction Handling I found myself having issues when applying those 2 on beans that are accessed using reflection (not in all cases) and below I am going to dig into those issues.\n","tags":["spring","aop"],"title":"Spring AOP and Reflection pitfalls","type":"post"},{"authors":null,"categories":["hints"],"content":"Prologue This is the first of a series of posts that demonstrate how to extend ServiceMix management using Spring’s jmx and aop features. The version of ServiceMix that is going to be used is 3.3.3-SNAPSHOT, but I’ve been using this technique since 3.3 and it can probably be applied to 4.x as well.\nProblem One of the most common problems I had with servicemix was that even the simplest changes in the configuration (e.g. changing the name of the destination in a jms endpoint) required editing the xbean.xml of the service unit and redeployment. Moreover, this affected the rest of the service units contained in the service assembly, which would be restarted too.\nAnother common problem was that I could not start, stop and restart a single service unit. That was a major problem since I often needed to be able to stop sending messages, while still being able to accept messages in the bus. The only option I had was to split our service units into multiple service units (e.g. an inbound service unit and an outbound service unit).\nSolution This series of blog posts will demonstrate how we used spring in order to:\nObtain service unit lifecycle management via jmx. Expose endpoint and marshaler configuration via jmx. Perform configuration changes on live production environments. Persist these changes to the database. Load endpoint custom configuration from the database. Part 1: Starting and Stopping Endpoints Although all ServiceMix endpoints have start and stop methods, these methods are exposed neither to jmx nor to the web console. A very simple but useful way to expose these methods to jmx is to use spring’s jmx auto exporting capabilities.\nExample As an example I will use the wsdl-first project from the servicemix samples in order to expose the lifecycle methods of the http endpoint to jmx. To do so I will delegate its lifecycle methods (start, stop) to a spring bean that is annotated with the @ManagedResource annotation, and I will modify the xbean.xml of the http service unit so that it automatically exports to jmx beans annotated as @ManagedResources.\nStep 1 The first step is to add spring as a provided dependency inside the http service unit.\nStep 2 Create the class that will be exported to jmx by spring. I will name the class HttpEndpointManager. This class will be annotated as @ManagedResource, will have a property of type HttpEndpoint and will delegate to the HttpEndpoint the lifecycle methods (activate, deactivate, start, stop). These methods will be exposed to jmx by being annotated as @ManagedOperation.
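Something along these lines (a sketch; the description text is illustrative):

import org.apache.servicemix.http.HttpEndpoint;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;

@ManagedResource(description = "Lifecycle manager for the http endpoint")
public class HttpEndpointManager {

    private HttpEndpoint endpoint;

    @ManagedOperation
    public void activate() throws Exception {
        endpoint.activate();
    }

    @ManagedOperation
    public void deactivate() throws Exception {
        endpoint.deactivate();
    }

    @ManagedOperation
    public void start() throws Exception {
        endpoint.start();
    }

    @ManagedOperation
    public void stop() throws Exception {
        endpoint.stop();
    }

    public HttpEndpoint getEndpoint() {
        return endpoint;
    }

    public void setEndpoint(HttpEndpoint endpoint) {
        this.endpoint = endpoint;
    }
}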
Step 3 Edit the xbean.xml of the http service unit and add the spring beans that will take care of automatically exporting the HttpEndpointManager to jmx.\nEnjoy You can now open JConsole and use the HttpEndpointManager MBean to start/stop the HttpEndpoint without having to start/stop the whole service assembly.\nNotes Managing the lifecycle of endpoints in a simple assembly like the wsdl-first servicemix sample has no added value (since you can just stop the service assembly). However, this sample was chosen since most servicemix users are pretty familiar with it. In more complex assemblies this trick is a savior (because you can stop a single endpoint, while having the rest of the endpoints running). Moreover, this is the base for even more useful tricks that will be presented in the parts that follow.\nThe full source code of this example can be found here.\nIn the second part of the series, I will demonstrate how you can extend this trick in order to perform configuration modification via jmx.\n","date":1273870800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"426f184ec61de0e9f781a247e989c2a1","permalink":"https://iocanel.com/2010/05/extending-servicemix-management-features-using-spring-part-1/","publishdate":"2010-05-15T00:00:00+03:00","relpermalink":"/2010/05/extending-servicemix-management-features-using-spring-part-1/","section":"post","summary":"Prologue This is the first of a series of posts that demonstrate how to extend ServiceMix management using Spring’s jmx and aop features. The version of ServiceMix that is going to be used is 3.3.3-SNAPSHOT, but I’ve been using this technique since 3.3 and it can probably be applied to 4.x as well.\nProblem One of the most common problems I had with servicemix was that even the simplest changes in the configuration (e.g. changing the name of the destination in a jms endpoint) required editing the xbean.xml of the service unit and redeployment. Moreover, this affected the rest of the service units contained in the service assembly, which would be restarted too.\n","tags":["java","servicemix","spring"],"title":"Extending ServiceMix management features using Spring - Part 1","type":"post"},{"authors":null,"categories":["hints"],"content":"In the previous post Extend ServiceMix Management features using Spring – Part 1 I demonstrated a very simple technique that allows you to expose endpoint lifecycle operations via jmx. Now I am going to take it one step further and expose the endpoint configuration via jmx.\nIf you haven’t already, please catch up by reading Part 1.\nPart II: Modifying the configuration of a live endpoint I am going to use the wsdl-first servicemix sample as modified in the previous post and expose the locationURI property of the HttpEndpoint to jmx using Spring’s @ManagedAttribute annotation.\nStep 1 Open the HttpEndpointManager and delegate the getter and setter of the HttpEndpoint’s locationURI property.\nStep 2 Annotate both methods with @ManagedAttribute.
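The delegating accessors added to the HttpEndpointManager from part 1 could look like this (a sketch; @ManagedAttribute comes from org.springframework.jmx.export.annotation):

@ManagedAttribute
public String getLocationURI() {
    return endpoint.getLocationURI();
}

@ManagedAttribute
public void setLocationURI(String locationURI) {
    endpoint.setLocationURI(locationURI);
}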
Enjoy Once the assembly gets deployed, the locationURI property is exposed in the jmx console. Note that once the new property is applied, the endpoint needs to be reactivated (call deactivate and activate from jmx as shown in the previous post).\nAs you can see in the picture, I used jmx and changed the location uri from PersonService to NewPersonService, without editing, recompiling or redeploying the service assembly.\nThis approach is really simple and quite useful. Its biggest advantage is that even a person that has no knowledge of ServiceMix can alter the configuration. Moreover, it simplifies the monitoring procedure of production environments. The full source code of this example can be found here.\nIn Part 3 I will demonstrate how these changes in the configuration can be persisted, and how we can intercept the endpoint's lifecycle so that those changes are loaded each time the endpoint starts.\n","date":1273870800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"5f6d6e5c200d756ac22ee28bd9d86b76","permalink":"https://iocanel.com/2010/05/extending-servicemix-management-features-using-spring-part-2/","publishdate":"2010-05-15T00:00:00+03:00","relpermalink":"/2010/05/extending-servicemix-management-features-using-spring-part-2/","section":"post","summary":"In the previous post Extend ServiceMix Management features using Spring – Part 1 I demonstrated a very simple technique that allows you to expose endpoint lifecycle operations via jmx. Now I am going to take it one step further and expose the endpoint configuration via jmx.\nIf you haven’t already, please catch up by reading Part 1.\nPart II: Modifying the configuration of a live endpoint I am going to use the wsdl-first servicemix sample as modified in the previous post and expose the locationURI property of the HttpEndpoint to jmx using Spring’s @ManagedAttribute annotation.\n","tags":["java","servicemix","spring"],"title":"Extending ServiceMix management features using Spring - Part 2","type":"post"},{"authors":null,"categories":["hints"],"content":"In the previous post Extend ServiceMix Management features using Spring – Part 2 I demonstrated how to use spring to gain control over endpoint lifecycle and configuration via jmx. You might wonder by now “what happens to those custom changes if I have to redeploy the assembly, restart servicemix or, even worse, restart the server?”. The short answer is that these changes are lost. The long answer is in this blog post, which explains how to persist those changes and how to make the endpoint reload them each time it starts.\nPart III: Modifying, Persisting and Loading Custom Configuration Automatically In order to persist and auto load custom configuration upon endpoint start-up, all we need is the following.\nFor persisting: A way to serialize the configuration in xml (jaxb2). A way to persist the configuration (jpa/hibernate). For auto loading: A way to intercept the endpoint’s start and activate methods (spring aop). A way to apply that configuration to the endpoint (beanutils). The basic idea is that for each endpoint the custom configuration can be serialized to xml and persisted, and with the use of aop interceptors reloaded to the endpoint each time it starts up.\nStep 1: Configuring persistence For persisting configuration I am going to use JPA/Hibernate and MySQL. I want to keep things as simple as possible, so I will create a table that only contains 2 fields:\nID: the id of the endpoint, which will be the primary key CONFIGURATION: a text field that will hold the configuration in xml format The endpoint id can be retrieved by calling endpoint.getKey(). The configuration is the XML representation of the configuration (more details later).\nThe persistence unit, the entity and the data access object are things that we want to be reusable, so they had better be in a separate jar. I will call it management-support.\nLet’s start creating the new jar by adding the entity.
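A minimal sketch of the entity; the table name is illustrative, and the two columns map to the ID and CONFIGURATION fields described above:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;

@Entity
@Table(name = "ENDPOINT_CONFIGURATION")
public class EndpointConfiguration {

    // The endpoint key, as returned by endpoint.getKey().
    @Id
    private String id;

    // The XML representation of the managed configuration.
    @Lob
    private String configuration;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getConfiguration() {
        return configuration;
    }

    public void setConfiguration(String configuration) {
        this.configuration = configuration;
    }
}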
Now we can create the persistence unit. Note that in this example I am adding all the database connection information inside persistence.xml, leaving pooling to hibernate. It would be better if I created a datasource, but for the sake of simplicity I will not.\nNow it’s time to create a very simple dao for the EndpointConfiguration entity.\nStep 2: Configuring configuration serialization For each endpoint type whose configuration we want serialized and persisted, I am going to create a pojo that contains all the properties that are managed. The pojo will be annotated with Jaxb annotations so that we can easily serialize it to xml. Before serialization takes place, the pojo needs to be populated with the values of the current configuration. For this purpose I am going to use BeanUtils (spring beanutils). Now we can update our endpoint manager, adding 2 methods (save & load of configuration) and the ConfigurationDao that was presented above.\nThe new endpoint manager will expose to jmx the saveConfiguration and loadConfiguration managed operations.\nStep 3: Configuring Endpoint lifecycle interception In this section I will show how to intercept the lifecycle methods of the endpoint using spring-aop. Spring aop will be configured using cglib proxies. The goal is to intercept the start and activate methods, call the loadConfiguration method on the endpoint manager and then proceed with the execution. So the interceptor needs to be aware of the endpoint that it intercepts (determined by the pointcut definition) and the endpoint manager (which will be injected into the bean that plays the role of the Aspect). So the interceptor will look like this:
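A sketch using the aopalliance MethodInterceptor interface; the class name is illustrative, EndpointManager is the generic endpoint manager described in step 4, and the pointcut limiting the interceptor to the start and activate methods is defined in the xbean.xml:

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class ConfigurationLoadingInterceptor implements MethodInterceptor {

    // The manager of the endpoint that this interceptor is bound to (injected).
    private EndpointManager endpointManager;

    public Object invoke(MethodInvocation invocation) throws Throwable {
        // Reload the persisted configuration before start/activate proceeds.
        endpointManager.loadConfiguration();
        return invocation.proceed();
    }

    public void setEndpointManager(EndpointManager endpointManager) {
        this.endpointManager = endpointManager;
    }
}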
Note that we are intercepting both start and activate methods. This is because some endpoints need to be restarted in order to refresh their configuration, while others need to be reactivated.\nStep 4: Putting the pieces together Now it’s time to put all the pieces together. I am going to create a new jar, the management support, and add to it a generic endpoint manager (the base class for all endpoint managers), the endpoint configuration entity, the configuration dao and the persistence unit. The example project (wsdl-first) will be modified so that the HttpEndpointManager extends the generic endpoint manager, and the http-su xbean.xml configures persistence and aop as explained above.\nThe generic EndpointManager The POJO that represents the HttpEndpoint configuration The updated HttpEndpointManager\nAnd finally the xbean.xml for the http service unit The final configuration might seem a bit bloated. It can become a lot tidier by using xbean features, however this goes far beyond the scope of this post.\nPreparing the container For this example to run we need to add a few jars to servicemix:\nhibernate-entitymanager hibernate-annotations aspectjrt spring-orm the dependencies of the above You can download the complete example here, which contains all the dependencies under wsdl-first/lib/optional.\nFinal words I hope that you find it useful. Personally, I’ve been using it for quite some time now and I am very happy with it. Using this …","date":1273870800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1778099604,"objectID":"07e1df072b398de18fa8ee0b6f817c8b","permalink":"https://iocanel.com/2010/05/extending-servicemix-management-features-using-spring-part-3/","publishdate":"2010-05-15T00:00:00+03:00","relpermalink":"/2010/05/extending-servicemix-management-features-using-spring-part-3/","section":"post","summary":"In the previous post Extend ServiceMix Management features using Spring – Part 2 I demonstrated how to use spring to gain control over endpoint lifecycle and configuration via jmx. You might wonder by now “what happens to those custom changes if I have to redeploy the assembly, restart servicemix or, even worse, restart the server?”. The short answer is that these changes are lost. The long answer is in this blog post, which explains how to persist those changes and how to make the endpoint reload them each time it starts.\n","tags":["java","servicemix","spring"],"title":"Extending ServiceMix management features using Spring - Part 3","type":"post"}
]