<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Shubham&apos;s Blog</title><description>Welcome to Shubham&apos;s corner on the internet. I write about things I learn and things I find interesting.</description><link>https://blog-schwiftycold.firebaseapp.com/</link><item><title>Migrating blogs to Hugo</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2025-03-12-migration-to-hugo/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2025-03-12-migration-to-hugo/</guid><description>Migrating my blogs to Hugo</description><pubDate>Wed, 12 Mar 2025 11:34:03 GMT</pubDate><content:encoded>&lt;p&gt;This is just a notification that my new blogs will be posted on the new site.
The site URL remains the same - blog.shubham.codes&lt;/p&gt;
</content:encoded></item><item><title>Using Jmespath in Emacs</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2024-02-06-jmespath-emacs-library/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2024-02-06-jmespath-emacs-library/</guid><description>Querying JSON files/data using Jmespath library in Emacs</description><pubDate>Mon, 05 Feb 2024 18:30:00 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;When you have a small JSON file, it is easy to find what you want.
But querying large JSON data by hand quickly becomes troublesome.&lt;/p&gt;
&lt;p&gt;This is where tools like JMESPath come in, letting you filter and transform the data to your liking.
This post is about a small wrapper over the &lt;code&gt;jp&lt;/code&gt; CLI utility that you can use while working on JSON files in Emacs.&lt;/p&gt;
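&lt;p&gt;To give a feel for what such a query does, take the JMESPath expression &lt;code&gt;people[?age &amp;gt; `30`].name&lt;/code&gt;, which selects the names of all people older than 30 (the sample data below is made up for illustration). The equivalent filter, written in plain Python with only the standard library, looks like this:&lt;/p&gt;

```python
import json

# Made-up sample data, similar to what you might pipe into jp
doc = json.loads("""
{"people": [{"name": "Alice", "age": 35},
            {"name": "Bob",   "age": 28},
            {"name": "Carol", "age": 41}]}
""")

# Equivalent of the JMESPath query: people[?age > `30`].name
names = [p["name"] for p in doc["people"] if p["age"] > 30]
print(names)  # ['Alice', 'Carol']
```

&lt;p&gt;With &lt;code&gt;jp&lt;/code&gt; you get the same result directly from the command line or, with this package, from inside Emacs.&lt;/p&gt;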
&lt;h2&gt;Installing JP&lt;/h2&gt;
&lt;h3&gt;Linux&lt;/h3&gt;
&lt;p&gt;On Linux, you can install the utility by downloading the binary and making it executable.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; sudo wget https://github.com/jmespath/jp/releases/latest/download/jp-linux-amd64 \
&amp;gt;   -O /usr/local/bin/jp  &amp;amp;&amp;amp; sudo chmod +x /usr/local/bin/jp
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Mac&lt;/h3&gt;
&lt;p&gt;On Mac, you can install Jmespath CLI using &lt;code&gt;brew&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; brew install jmespath/jmespath/jp
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Adding Jmespath recipe in Emacs&lt;/h2&gt;
&lt;p&gt;This is the first recipe I published on &lt;a href=&quot;https://melpa.org/&quot;&gt;MELPA&lt;/a&gt;. Here are the steps to install it in &lt;code&gt;Doom&lt;/code&gt; using &lt;code&gt;straight&lt;/code&gt;.
You can use any package manager that supports the &lt;code&gt;MELPA&lt;/code&gt; repository.&lt;/p&gt;
&lt;h3&gt;Doom Emacs&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;On Doom, you can just mention the below recipe in your &lt;code&gt;packages.el&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;(package! jmespath)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Also, add the below line in your &lt;code&gt;config.el&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;(use-package jmespath)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Using Jmespath&lt;/h2&gt;
&lt;p&gt;There is an interactive function, &lt;code&gt;jmespath-query-and-show&lt;/code&gt;, that you can use to query the currently opened buffer or a file.
To use it with the current buffer, simply call the function and enter your query.
The output will be shown in a new buffer named &lt;strong&gt;JMESPath Result&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To use it with a different file, set the &lt;code&gt;Universal Argument&lt;/code&gt; using &lt;code&gt;C-u&lt;/code&gt; or &lt;code&gt;SPC-u&lt;/code&gt; (using evil).
You will then be prompted for the file to execute the query on.&lt;/p&gt;
&lt;h2&gt;Read more&lt;/h2&gt;
&lt;p&gt;To learn more about JMESPath, visit the &lt;a href=&quot;https://jmespath.org/&quot;&gt;JMESPath site&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Running PMML models in Erlang using NIF and C++</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-10-15-pmml-library-erlang-nif/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-10-15-pmml-library-erlang-nif/</guid><description>Here, we will use NIF to call the C++ functions on Erlang side and use the cPMML library to parse the PMML files and run prediction on a linear regression model (can be used for any AI/ML model).</description><pubDate>Sun, 15 Oct 2023 05:10:32 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Erlang is a great language for building concurrent systems that are fault-tolerant and scalable.
But it lacks some of the libraries that are available in other languages.
One such example is using &lt;code&gt;PMML&lt;/code&gt; files for machine learning models.
At the time of writing, Erlang doesn&apos;t have a library for parsing &lt;code&gt;PMML&lt;/code&gt; files.
This is a problem for people who want to use Erlang for building machine learning systems.
Here I&apos;ll show how to build a &lt;code&gt;NIF&lt;/code&gt; in &lt;code&gt;C++&lt;/code&gt; that lets Erlang parse &lt;code&gt;PMML&lt;/code&gt; files.
More specifically, I&apos;ll wrap the &lt;a href=&quot;https://github.com/AmadeusITGroup/cPMML&quot;&gt;cPMML&lt;/a&gt; library in a &lt;code&gt;NIF&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Erlang NIF&lt;/h2&gt;
&lt;p&gt;Erlang NIFs provide a way to define functions in &lt;code&gt;C/C++&lt;/code&gt; and call them natively from &lt;code&gt;Erlang&lt;/code&gt; code.
The &lt;code&gt;C/C++&lt;/code&gt; program is compiled into a library file that can be used in Erlang.
This library is dynamically linked into the Erlang VM, making it the fastest way of calling &lt;code&gt;C/C++&lt;/code&gt; code from &lt;code&gt;Erlang&lt;/code&gt;.
The disadvantage of this approach is that if the &lt;code&gt;C/C++&lt;/code&gt; code crashes, it takes the Erlang VM down with it.
You also have to maintain the &lt;code&gt;C/C++&lt;/code&gt; code alongside the &lt;code&gt;Erlang&lt;/code&gt; code.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;cPMML&lt;/code&gt; library requires a compiler that supports the &lt;code&gt;C++11&lt;/code&gt; standard.
Also, make sure you have the required header files from your &lt;code&gt;Erlang&lt;/code&gt; installation.&lt;/p&gt;
&lt;p&gt;You can locate them on macOS or Linux using the &lt;code&gt;find&lt;/code&gt; command.
If the header files show up in multiple locations, use the one under Homebrew&apos;s &lt;code&gt;Cellar&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;find / -name erl_nif.h | grep erl_nif.h
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Hello NIF program&lt;/h2&gt;
&lt;h3&gt;C/C++ code&lt;/h3&gt;
&lt;h4&gt;Header file&lt;/h4&gt;
&lt;p&gt;You need to include the below header file to use the &lt;code&gt;Erlang&lt;/code&gt; functionalities.
On a lower level, it defines the data structures and environment that &lt;code&gt;Erlang&lt;/code&gt; provides to the &lt;code&gt;C/C++&lt;/code&gt; code.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;erl_nif.h&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Function Definition&lt;/h4&gt;
&lt;p&gt;These functions are called from &lt;code&gt;Erlang&lt;/code&gt; code.
They must follow a specific structure and return a specific type of value.
Here, we will define a function that will return a &quot;Hello, World!&quot; string.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ERL_NIF_TERM&lt;/code&gt; is the return type of the function.
It is an interface for various return types like &lt;code&gt;binary&lt;/code&gt;, &lt;code&gt;tuple&lt;/code&gt;, &lt;code&gt;list&lt;/code&gt;, &lt;code&gt;atom&lt;/code&gt;, etc.
This means you can return any of these types from the function.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ErlNifEnv&lt;/code&gt; is a pointer to the Erlang environment.
It provides access to various &lt;code&gt;Erlang&lt;/code&gt; functionalities like memory management, exception handling and &lt;code&gt;Erlang&lt;/code&gt; term creation.
For us, it will help in creating Erlang terms such as &lt;code&gt;string&lt;/code&gt;.
&lt;code&gt;enif_make_string&lt;/code&gt; is the function that creates an Erlang string from the &lt;code&gt;C/C++&lt;/code&gt; code.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;argc&lt;/code&gt; and &lt;code&gt;argv&lt;/code&gt; provide the number of arguments and the arguments passed to the function from &lt;code&gt;Erlang&lt;/code&gt; code.&lt;/p&gt;
&lt;p&gt;Below is the function definition for &lt;code&gt;hello world&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;static ERL_NIF_TERM hello_world(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) {
    return enif_make_string(env, &quot;Hello, World!&quot;, ERL_NIF_LATIN1);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Export functions&lt;/h4&gt;
&lt;p&gt;You need to specify the functions that you want to export to the &lt;code&gt;Erlang&lt;/code&gt; code.
The structure is a list of &lt;code&gt;ErlNifFunc&lt;/code&gt; objects.
Each object has the name of the function that Erlang sees, the number of arguments and the function pointer.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;static ErlNifFunc nif_funcs[] = {
    {&quot;hello_world&quot;, 0, hello_world}
};

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To initialize the &lt;code&gt;NIF&lt;/code&gt; library, you need to invoke the &lt;code&gt;ERL_NIF_INIT&lt;/code&gt; macro.
It takes the name of your &lt;code&gt;Erlang&lt;/code&gt; module and the exported functions.&lt;/p&gt;
&lt;p&gt;Let&apos;s call our &lt;code&gt;Erlang&lt;/code&gt; module &lt;code&gt;hello_nif&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ERL_NIF_INIT(hello_nif, nif_funcs, NULL, NULL, NULL, NULL)
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Final code&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;extern &quot;C&quot; {
  #include &amp;lt;erl_nif.h&amp;gt;
  static ERL_NIF_TERM hello_world(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) {
      return enif_make_string(env, &quot;Hello, World!&quot;, ERL_NIF_LATIN1);
  }
  static ErlNifFunc nif_funcs[] = {
      {&quot;hello_world&quot;, 0, hello_world}
  };
}

ERL_NIF_INIT(hello_nif, nif_funcs, NULL, NULL, NULL, NULL);
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Erlang code&lt;/h3&gt;
&lt;h4&gt;Erlang module&lt;/h4&gt;
&lt;p&gt;We will name our &lt;code&gt;Erlang&lt;/code&gt; module &lt;code&gt;hello_nif.erl&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-module(hello_nif).
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Define your NIF functions&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;-export&lt;/code&gt; makes the functions callable from outside the module, and &lt;code&gt;-nifs&lt;/code&gt; declares that their definitions will be replaced by the &lt;code&gt;C/C++&lt;/code&gt; implementations when the library loads.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-export([hello_world/0]).
-nifs([hello_world/0]).
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Load the library on module load&lt;/h4&gt;
&lt;p&gt;If you compiled your &lt;code&gt;C/C++&lt;/code&gt; code to a library named &lt;code&gt;hello_nif.so&lt;/code&gt;, you can load it using the &lt;code&gt;load_nif&lt;/code&gt; function.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-on_load(init/0).

init() -&amp;gt;
    ok = erlang:load_nif(&quot;./hello_nif&quot;, 0).
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Fallback function&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;NIF&lt;/code&gt; library may fail to load, and you need to handle that in the &lt;code&gt;Erlang&lt;/code&gt; code.
These fallback functions run if the library is not loaded.
Each one must have the same name and arity as its counterpart in the &lt;code&gt;C/C++&lt;/code&gt; code.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;hello_world() -&amp;gt;
    exit(nif_library_not_loaded).
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Final code&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;-module(hello_nif).
-export([hello_world/0]).
-nifs([hello_world/0]).
-on_load(init/0).

init() -&amp;gt;
    ok = erlang:load_nif(&quot;./hello_nif&quot;, 0).

hello_world() -&amp;gt;
    exit(nif_library_not_loaded).
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Compiling and Running&lt;/h3&gt;
&lt;p&gt;To compile the &lt;code&gt;C/C++&lt;/code&gt; code you can use &lt;code&gt;gcc&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;Mac OS&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;gcc -o hello_nif.so hello.c -I /usr/local/lib/erlang/erts-13.2.2.2/include/ -bundle -bundle_loader /usr/local/lib/erlang/erts-13.2.2.2/bin/beam.smp
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Linux&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;gcc -o hello_nif.so hello.c -I /usr/lib/erlang/erts-13.2.2.2/include -shared -fpic
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In Erlang, you can compile the module and call the &lt;code&gt;hello_world&lt;/code&gt; function.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; c(hello_nif).
&amp;gt; hello_nif:hello_world().
&quot;Hello, World!&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;cPMML NIF&lt;/h2&gt;
&lt;p&gt;We saw how to create a simple &lt;code&gt;NIF&lt;/code&gt; that returns a string.
Now, let&apos;s create a &lt;code&gt;NIF&lt;/code&gt; that can take input and run prediction using a &lt;code&gt;PMML&lt;/code&gt; model file.
For this, we will use the same model file we created in the &lt;a href=&quot;/blog/2023-10-02-cpmml-predictions&quot;&gt;previous post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We have a PMML model for the linear regression &lt;code&gt;y = 2x + 1&lt;/code&gt; and we want to predict the value of &lt;code&gt;y&lt;/code&gt; for a given value of &lt;code&gt;x&lt;/code&gt;.
To keep it short, we will expose a single function called &lt;code&gt;predict&lt;/code&gt; that takes the input and returns the output.&lt;/p&gt;
&lt;h3&gt;C/C++ code&lt;/h3&gt;
&lt;p&gt;This assumes you can compile programs against &lt;code&gt;cPMML&lt;/code&gt; as described in the &lt;a href=&quot;/blog/2023-10-02-cpmml-predictions&quot;&gt;previous post&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;Header files&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;iostream&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;unordered_map&amp;gt;
#include &quot;cPMML.h&quot;

using namespace std;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;PMML Parser class&lt;/h4&gt;
&lt;p&gt;We want to load the model once and use it for multiple predictions.
For this, let&apos;s create a class that will load the model and call predictions on it.
We will maintain only one instance of this class and use it for all the predictions.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class PmmlModelParser {
private:
    cpmml::Model model;

public:
    PmmlModelParser(const string&amp;amp; modelname) {
        model = cpmml::Model(modelname);
    }

    string predict(const unordered_map&amp;lt;string, string&amp;gt;&amp;amp; x_input) {
        return model.predict(x_input);
    }
};

// Global variable holding the single loaded model instance
PmmlModelParser *pmmlModelParser = nullptr;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;NIF implementation&lt;/h4&gt;
&lt;p&gt;We will create two functions, &lt;code&gt;init&lt;/code&gt; and &lt;code&gt;predict&lt;/code&gt;.
The &lt;code&gt;init&lt;/code&gt; function will take the &lt;code&gt;PMML&lt;/code&gt; file as input and load it.
The &lt;code&gt;predict&lt;/code&gt; function will take the input and return the output.&lt;/p&gt;
&lt;p&gt;The below code describes the structure of the &lt;code&gt;NIF&lt;/code&gt; functions.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;extern &quot;C&quot; {
    #include &amp;lt;erl_nif.h&amp;gt;
    
    static ERL_NIF_TERM init(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { /* ... */ }

    static ERL_NIF_TERM predict(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) { /* ... */ }

    static ErlNifFunc nif_funcs[] = {
        {&quot;init&quot;, 1, init},
        {&quot;evaluate&quot;, 1, predict}
    };
}

ERL_NIF_INIT(lr_model, nif_funcs, NULL, NULL, NULL, NULL)

&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;init function&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;init&lt;/code&gt; function will take the &lt;code&gt;PMML&lt;/code&gt; file as input and load it.
We will use the &lt;code&gt;enif_inspect_binary&lt;/code&gt; function to get the &lt;code&gt;PMML&lt;/code&gt; file as a binary.
Then we will convert it to a string and pass it to the &lt;code&gt;PmmlModelParser&lt;/code&gt; class.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;static ERL_NIF_TERM init(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) {
    ErlNifBinary input_bin;
    if (!enif_inspect_binary(env, argv[0], &amp;amp;input_bin)) {
        return enif_make_badarg(env);
    }
    string input(reinterpret_cast&amp;lt;char*&amp;gt;(input_bin.data), input_bin.size);
    pmmlModelParser = new PmmlModelParser(input);
    return enif_make_atom(env, &quot;ok&quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;predict function&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;predict&lt;/code&gt; function will take the input and return the output.
On the Erlang side, the input is a map of strings and the output is a string.&lt;/p&gt;
&lt;p&gt;But on the &lt;code&gt;C/C++&lt;/code&gt; side, we will convert the input to a map and pass it to the &lt;code&gt;PmmlModelParser&lt;/code&gt; class because &lt;code&gt;cPMML&lt;/code&gt; library expects a map as input.
Don&apos;t get intimidated by the code below, it&apos;s just converting the &lt;code&gt;Erlang&lt;/code&gt; map to a &lt;code&gt;C++&lt;/code&gt; map by iterating over all the keys and value pairs.&lt;/p&gt;
&lt;p&gt;For prediction, we will need a map with the key as &lt;code&gt;X&lt;/code&gt; and value as the input.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I could have just used a binary input as X and converted it to a string on the &lt;code&gt;C/C++&lt;/code&gt; side. But, to make this more general, I am converting the &lt;code&gt;Erlang&lt;/code&gt; map to a &lt;code&gt;C++&lt;/code&gt; map. Now this can be used for any PMML file and not just a specific one.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code&gt;static ERL_NIF_TERM predict(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) {
  unordered_map&amp;lt;std::string, std::string&amp;gt; cpp_map;

  // The model must have been loaded via init/1 first
  if (pmmlModelParser == nullptr || !enif_is_map(env, argv[0]))
  {
    return enif_make_badarg(env);
  }

  ErlNifMapIterator iter;
  if (enif_map_iterator_create(env, argv[0], &amp;amp;iter, ERL_NIF_MAP_ITERATOR_FIRST))
  {
    do
    {
      ERL_NIF_TERM key, value;
      if (enif_map_iterator_get_pair(env, &amp;amp;iter, &amp;amp;key, &amp;amp;value))
      {
        // Keys and values arrive as Erlang strings; copy them into fixed buffers
        char key_str[64], value_str[64];
        if (enif_get_string(env, key, key_str, sizeof(key_str), ERL_NIF_LATIN1) &amp;amp;&amp;amp;
            enif_get_string(env, value, value_str, sizeof(value_str), ERL_NIF_LATIN1))
        {
          cpp_map[key_str] = value_str;
        }
      }
    } while (enif_map_iterator_next(env, &amp;amp;iter));
    enif_map_iterator_destroy(env, &amp;amp;iter);
  }

  string ret = pmmlModelParser-&amp;gt;predict(cpp_map);
  return enif_make_string(env, ret.c_str(), ERL_NIF_LATIN1);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Erlang code&lt;/h3&gt;
&lt;p&gt;We will name our &lt;code&gt;Erlang&lt;/code&gt; module &lt;code&gt;lr_model.erl&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-module(lr_model).
-export([init/1, evaluate/1]).
-nifs([init/1, evaluate/1]).
-on_load(init/0).

init() -&amp;gt;
    ok = erlang:load_nif(&quot;./lr_model&quot;, 0).

init(_PmmlFile) -&amp;gt;
    exit(problem_loading_nif).

evaluate(_Input) -&amp;gt;
    exit(problem_loading_nif).
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Compiling and Running&lt;/h3&gt;
&lt;p&gt;Compiling the &lt;code&gt;C/C++&lt;/code&gt; code now takes an extra flag to link the &lt;code&gt;cPMML&lt;/code&gt; library.
As the model is predicting the values for &lt;code&gt;y = 2x + 1&lt;/code&gt;, we should get &lt;code&gt;~3&lt;/code&gt; for &lt;code&gt;x = 1&lt;/code&gt; and &lt;code&gt;~1&lt;/code&gt; for &lt;code&gt;x = 0&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;g++ -std=c++11 library.cpp \
-o lr_model.so \
-lcPMML \
-I /usr/local/lib/erlang/erts-13.2.2.2/include/ \
-bundle \
-bundle_loader /usr/local/lib/erlang/erts-13.2.2.2/bin/beam.smp
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; c(lr_model).
&amp;gt; lr_model:init(&amp;lt;&amp;lt;&quot;./lr_model.pmml&quot;&amp;gt;&amp;gt;).
&amp;gt; lr_model:evaluate(#{&quot;X&quot;=&amp;gt;&quot;0&quot;}).
&quot;0.967265&quot;
&amp;gt; lr_model:evaluate(#{&quot;X&quot;=&amp;gt;&quot;1&quot;}).
&quot;3.007469&quot;
&lt;/code&gt;&lt;/pre&gt;
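&lt;p&gt;As a quick sanity check on these numbers (a small sketch, not part of the NIF code): the noise-free line &lt;code&gt;y = 2x + 1&lt;/code&gt; gives exactly 1 at &lt;code&gt;x = 0&lt;/code&gt; and 3 at &lt;code&gt;x = 1&lt;/code&gt;, and the model was trained on data carrying 0.1-sigma Gaussian noise, so outputs within roughly 0.1 of those values are what we expect.&lt;/p&gt;

```python
# True (noise-free) line the PMML model approximates
def line(x):
    return 2 * x + 1

# Outputs reported by lr_model:evaluate/1 above
predicted = {0: 0.967265, 1: 3.007469}

for x, y_hat in predicted.items():
    # Training data carried 0.1-sigma Gaussian noise, so a small error is expected
    assert 0.1 > abs(y_hat - line(x))
print("predictions are consistent with y = 2x + 1")
```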
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog, we saw how to use &lt;code&gt;C/C++&lt;/code&gt; libraries from &lt;code&gt;Erlang&lt;/code&gt; code.
We used a PMML file of a linear regression model to predict values of &lt;code&gt;y = 2x + 1&lt;/code&gt; from &lt;code&gt;Erlang&lt;/code&gt;, with the implementation in C++.
We also saw how to use the &lt;code&gt;cPMML&lt;/code&gt; library to parse the &lt;code&gt;PMML&lt;/code&gt; file and run predictions.&lt;/p&gt;
</content:encoded></item><item><title>Running AI/ML predictions in CPP using cPMML library</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-10-02-cpmml-predictions/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-10-02-cpmml-predictions/</guid><description>This is a short blog on how to install cPMML library and run AI/ML predictions in CPP.</description><pubDate>Sun, 01 Oct 2023 23:39:47 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;PMML&lt;/code&gt; is a markup language to save your AI/ML model files
so that you can use them for predictions later on
(maybe during production).
&lt;code&gt;cPMML&lt;/code&gt; is a library created by the
&lt;a href=&quot;https://github.com/AmadeusITGroup/cPMML&quot;&gt;AmadeusITGroup&lt;/a&gt;
to parse and run predictions in C++.
In this blog, we will train a linear regression model in
&lt;code&gt;python&lt;/code&gt; and generate a &lt;code&gt;pmml&lt;/code&gt; file and then we will
run our predictions in &lt;code&gt;C++&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Creating a model file&lt;/h2&gt;
&lt;h3&gt;Dependencies&lt;/h3&gt;
&lt;p&gt;We will need &lt;code&gt;pandas&lt;/code&gt;, &lt;code&gt;numpy&lt;/code&gt;, &lt;code&gt;scikit-learn&lt;/code&gt; and &lt;code&gt;sklearn2pmml&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pip install pandas numpy scikit-learn sklearn2pmml
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Imports&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The model&lt;/h3&gt;
&lt;h4&gt;Dataset&lt;/h4&gt;
&lt;p&gt;To keep things simple, let&apos;s train a linear regression model
to match the equation &lt;code&gt;y = 2x + 1&lt;/code&gt;.
We can generate a random dataset for this equation.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;X = np.random.rand(100, 1)
Y = 2 * X + 1 + 0.1 * np.random.randn(100, 1)
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Test/Train data&lt;/h4&gt;
&lt;p&gt;Next, we&apos;ll divide the data into test and train datasets.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;df = pd.DataFrame({&apos;X&apos;: X.flatten(), &apos;Y&apos;: Y.flatten()})
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
X_train = train_df[[&apos;X&apos;]]
y_train = train_df[&apos;Y&apos;]
X_test = test_df[[&apos;X&apos;]]
y_test = test_df[&apos;Y&apos;]
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Training the model&lt;/h4&gt;
&lt;p&gt;For training, we take the model from the scikit-learn
library and use the dataset we generated above.
We can also check the mean squared error (&lt;code&gt;mse&lt;/code&gt;) to get an idea of the model&apos;s accuracy.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pipeline = PMMLPipeline([
    (&quot;regressor&quot;, LinearRegression())
])

pipeline.fit(X_train, y_train)

y_pred = pipeline.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print(f&quot;Mean Squared Error: {mse}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Saving the pmml file&lt;/h4&gt;
&lt;p&gt;If you are satisfied with the performance of your model, you can
export it as a pmml file.
We will save the model as &lt;code&gt;lr_model.pmml&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sklearn2pmml(pipeline, &quot;lr_model.pmml&quot;, with_repr = True)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Using the model file&lt;/h2&gt;
&lt;p&gt;The main focus of this blog is using the model in a C++ program.
For this, you will need to install the &lt;code&gt;cPMML&lt;/code&gt; library.&lt;/p&gt;
&lt;h3&gt;Installing cPMML&lt;/h3&gt;
&lt;p&gt;To install the library on your system, you just need to run the command below.
It runs &lt;code&gt;cmake&lt;/code&gt;, so you should have &lt;code&gt;cmake&lt;/code&gt; installed.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git clone https://github.com/AmadeusITGroup/cPMML.git &amp;amp;&amp;amp; cd cPMML &amp;amp;&amp;amp; ./install.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;For Mac M1&lt;/h4&gt;
&lt;p&gt;I ran into some problems while installing this on Mac M1.
Here are the steps to install this effortlessly.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Ensure you have the latest version of &lt;code&gt;cmake&lt;/code&gt; installed in your system.&lt;/li&gt;
&lt;li&gt;You can edit the &lt;code&gt;install.sh&lt;/code&gt; script to remove the &lt;code&gt;-j 4&lt;/code&gt; flag from the &lt;code&gt;cmake -j 4 ..&lt;/code&gt; command. This disables the parallel build.&lt;/li&gt;
&lt;li&gt;The last line of the &lt;code&gt;install.sh&lt;/code&gt; script is &lt;code&gt;sudo ldconfig&lt;/code&gt;. Change this to &lt;code&gt;sudo update_dyld_shared_cache&lt;/code&gt;, the macOS equivalent, so the installed &lt;code&gt;.dylib&lt;/code&gt; or &lt;code&gt;.so&lt;/code&gt; library files are picked up properly.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Running the predictions&lt;/h3&gt;
&lt;h4&gt;Include the library&lt;/h4&gt;
&lt;p&gt;The first thing is to include the library.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &quot;cPMML.h&quot;
#include &amp;lt;iostream&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Load the model&lt;/h4&gt;
&lt;p&gt;Then you can load the model.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;int main() {
  cpmml::Model model(&quot;lr_model.pmml&quot;);
  return 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Start predictions&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;cPMML&lt;/code&gt; library takes input as an unordered_map of strings.
For us, there is only one input which is &lt;code&gt;X&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;int main() {
  cpmml::Model model(&quot;lr_model.pmml&quot;);

  // This should yield a value close to 1
  std::unordered_map&amp;lt;std::string, std::string&amp;gt; input1 = {
    {&quot;X&quot;, &quot;0&quot;}
  };


  // This should yield a value close to 21
  std::unordered_map&amp;lt;std::string, std::string&amp;gt; input2 = {
    {&quot;X&quot;, &quot;10&quot;}
  };

  std::cout&amp;lt;&amp;lt;&quot;X = 0 Y = &quot;&amp;lt;&amp;lt;model.predict(input1)&amp;lt;&amp;lt;&apos;\n&apos;;
  std::cout&amp;lt;&amp;lt;&quot;X = 10 Y = &quot;&amp;lt;&amp;lt;model.predict(input2)&amp;lt;&amp;lt;&apos;\n&apos;;
  
  return 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Compilation&lt;/h4&gt;
&lt;p&gt;You can compile the code by including the &lt;code&gt;cPMML&lt;/code&gt; library.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; g++ -std=c++11 predict.cpp -o predict.o -lcPMML
&amp;gt; ./predict.o
X = 0 Y = 0.967265
X = 10 Y = 21.369305
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog, we saw how to store your model as a &lt;code&gt;PMML&lt;/code&gt; file and load it in &lt;code&gt;C++&lt;/code&gt; using &lt;code&gt;cPMML&lt;/code&gt; library.
You can view the full code for this post &lt;a href=&quot;https://github.com/UnresolvedCold/prediction-pmml-using-cpmml-in-cpp.git&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Embedded Jetty server with handlers for legacy Java applications</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-09-02-embedded-jetty-handlers/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-09-02-embedded-jetty-handlers/</guid><description>A simple way to embed a Jetty server in a Java application and add handlers for different requests</description><pubDate>Fri, 01 Sep 2023 18:30:00 GMT</pubDate><content:encoded>
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://eclipse.dev/jetty/&quot;&gt;Jetty&lt;/a&gt; is a long-standing, powerful yet lightweight &lt;code&gt;Java&lt;/code&gt; library that helps you create servers and clients for HTTP (literally all the versions), WebSockets, OSGI, JMX, JAAS and much more.&lt;/p&gt;
&lt;p&gt;This post will deal with embedding a Jetty server in a Java application and adding handlers for different requests.
I&apos;m assuming you have a basic understanding of Java and Maven and have familiarity with different HTTP methods like GET, POST, PUT, DELETE, etc.&lt;/p&gt;
&lt;h2&gt;Jetty Server Architecture&lt;/h2&gt;
&lt;p&gt;An incoming request is handled using 4 components - Threadpool, Connectors, Handlers and Server.
&lt;code&gt;Server&lt;/code&gt; is the core of the system that manages the other components and the entire lifecycle of the server.
&lt;code&gt;Connectors&lt;/code&gt; accept requests over different protocols like HTTP, HTTPS, etc.
&lt;code&gt;Handlers&lt;/code&gt; are the components that process the incoming request.
&lt;code&gt;Threadpools&lt;/code&gt; are like tiny workers that make the system multi-threaded and help in handling multiple requests at the same time.&lt;/p&gt;
&lt;p&gt;An incoming request is first accepted at the &lt;code&gt;Connector&lt;/code&gt;.
Then it is sent to the &lt;code&gt;Server&lt;/code&gt;, which routes it to the &lt;code&gt;Handler&lt;/code&gt; where the response is generated.
The response is produced on a thread supplied by the &lt;code&gt;Threadpool&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Using Jetty in a Java application&lt;/h2&gt;
&lt;h3&gt;New Java App&lt;/h3&gt;
&lt;p&gt;First, let&apos;s create a new &lt;code&gt;Java&lt;/code&gt; app using &lt;code&gt;Maven&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; mvn archetype:generate \
-DgroupId=com.schwiftycold.poc \
-DartifactId=poc_jetty_server \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DinteractiveMode=false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add the following properties to your &lt;code&gt;POM&lt;/code&gt; file.
This will set the &lt;code&gt;Java&lt;/code&gt; version and language encoding for the app.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  &amp;lt;properties&amp;gt;
    &amp;lt;project.build.sourceEncoding&amp;gt;UTF-8&amp;lt;/project.build.sourceEncoding&amp;gt;
    &amp;lt;maven.compiler.source&amp;gt;20&amp;lt;/maven.compiler.source&amp;gt;
    &amp;lt;maven.compiler.target&amp;gt;20&amp;lt;/maven.compiler.target&amp;gt;
  &amp;lt;/properties&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Add Jetty dependency&lt;/h3&gt;
&lt;p&gt;Add the following dependency in your &lt;code&gt;pom.xml&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;dependencies&amp;gt;
  &amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.eclipse.jetty&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;jetty-server&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;11.0.3&amp;lt;/version&amp;gt;
  &amp;lt;/dependency&amp;gt;
  ...
&amp;lt;/dependencies&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Main server&lt;/h3&gt;
&lt;p&gt;The main component of the system is the server class.
This class will create a server context and register connectors and handlers.
Let&apos;s call this class &lt;code&gt;MainServer&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;Threadpool&lt;/h4&gt;
&lt;p&gt;There are different kinds of Threadpool offered by Jetty like &lt;code&gt;QueuedThreadPool&lt;/code&gt;, &lt;code&gt;ExecutorThreadPool&lt;/code&gt;, &lt;code&gt;ScheduledThreadPool&lt;/code&gt;, etc.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;QueuedThreadPool&lt;/code&gt; maintains a fixed number of threads and a queue of incoming requests.
When a request arrives and all threads are busy, it is added to the queue and picked up once a thread becomes available.
Due to this behaviour, it is the pool most widely used for handling &lt;code&gt;HTTP&lt;/code&gt; requests.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ScheduledThreadPool&lt;/code&gt; extends &lt;code&gt;QueuedThreadPool&lt;/code&gt; to handle scheduled tasks.
It provides us with a scheduler to execute tasks at a certain interval.
It is generally used for scheduling background tasks.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ExecutorThreadPool&lt;/code&gt; enables you to use custom &lt;code&gt;Executor&lt;/code&gt; as the Threadpool for Jetty.&lt;/p&gt;
&lt;p&gt;Here, we will be using &lt;code&gt;QueuedThreadPool&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ThreadPool threadPool = new QueuedThreadPool();
&lt;/code&gt;&lt;/pre&gt;
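&lt;p&gt;Under the hood, this queueing behaviour is similar to a JDK &lt;code&gt;ThreadPoolExecutor&lt;/code&gt; backed by an unbounded queue: excess tasks wait until a worker frees up. The standalone sketch below (the pool sizes are arbitrary, not Jetty defaults) illustrates the idea using only the standard library.&lt;/p&gt;

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class QueuedPoolSketch {
    public static void main(String[] args) throws Exception {
        // A small pool with an unbounded queue: extra tasks wait in the
        // queue until a worker thread becomes free.
        ExecutorService pool = new ThreadPoolExecutor(
                2, 2, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            pool.submit(handled::incrementAndGet); // queued if both workers are busy
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("handled=" + handled.get()); // prints handled=10
    }
}
```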
&lt;h4&gt;Server&lt;/h4&gt;
&lt;p&gt;The server is the essential component that manages the connectors and handlers.
It takes the Threadpool to create a new server instance.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Server server = new Server(threadPool);
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Connector&lt;/h4&gt;
&lt;p&gt;A connector allows us to accept a variety of different protocols like &lt;code&gt;HTTP&lt;/code&gt;, &lt;code&gt;HTTPS&lt;/code&gt;, &lt;code&gt;Unix domain socket&lt;/code&gt;, etc.&lt;/p&gt;
&lt;h5&gt;HTTP&lt;/h5&gt;
&lt;p&gt;A simple &lt;code&gt;HTTP&lt;/code&gt; connector can be initialized using the &lt;code&gt;ServerConnector&lt;/code&gt; class.
You can change the port it listens on using the &lt;code&gt;setPort&lt;/code&gt; method.
By default, the port is &lt;code&gt;8080&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ServerConnector connector = new ServerConnector(server);
connector.setPort(9120);
server.setConnectors(new Connector[]{connector});

&lt;/code&gt;&lt;/pre&gt;
&lt;h5&gt;HTTPS&lt;/h5&gt;
&lt;p&gt;To use &lt;code&gt;HTTPS&lt;/code&gt;, you&apos;ll need to register an &lt;code&gt;SSL/TLS&lt;/code&gt; certificate.
We won&apos;t be using it here, but a quick look gives a sense of what is possible.&lt;/p&gt;
&lt;p&gt;First, you will need an &lt;code&gt;SslContextFactory&lt;/code&gt;, which defines your &lt;code&gt;SSL/TLS&lt;/code&gt; configuration.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// In Jetty 10/11, SslContextFactory is abstract; use the server-side variant
SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath(&quot;/path/to/keystore.jks&quot;);
sslContextFactory.setKeyStorePassword(&quot;keystore-password&quot;);
sslContextFactory.setKeyManagerPassword(&quot;key-password&quot;);
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;ServerConnector httpsConnector = new ServerConnector(
    server,
    new SslConnectionFactory(sslContextFactory, &quot;http/1.1&quot;),
    new HttpConnectionFactory()
);
httpsConnector.setPort(8443);
server.setConnectors(new Connector[] {httpsConnector});
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Handlers&lt;/h4&gt;
&lt;p&gt;Handlers are the components that actually generate the response.
We will create a dedicated handler class for this later; for now, let&apos;s set the handler to &lt;code&gt;null&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;server.setHandler(null);
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Final code&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;public class MainServer {
  public static void startServer()  {
    try {
      ThreadPool threadPool = new QueuedThreadPool();

      Server server = new Server(threadPool);

      ServerConnector connector = new ServerConnector(server);
      connector.setPort(9120);
      
      server.setConnectors(new Connector[]{connector});
      server.setHandler(null);

      server.start();
      server.join();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Handler class&lt;/h3&gt;
&lt;p&gt;The handler class is where we write the logic to process requests,
so it deserves special attention.&lt;/p&gt;
&lt;p&gt;Let&apos;s create a new class that extends &lt;code&gt;AbstractHandler&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;Extend AbstractHandler&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;AbstractHandler&lt;/code&gt; class requires us to implement a &lt;code&gt;handle&lt;/code&gt; method.
This method is invoked on every new request.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;handle&lt;/code&gt; method takes four parameters.
The &lt;code&gt;target&lt;/code&gt; parameter denotes the endpoint that was requested.
If we hit the URL &lt;code&gt;http://localhost:9120/hi&lt;/code&gt;, the target will be &lt;code&gt;/hi&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;baseRequest&lt;/code&gt; and &lt;code&gt;request&lt;/code&gt; represent the same request in different contexts:
&lt;code&gt;baseRequest&lt;/code&gt; is Jetty-specific, whereas &lt;code&gt;request&lt;/code&gt; exposes the servlet-specific APIs.
Here, we won&apos;t be using the servlet APIs.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;response&lt;/code&gt; is the processed result generated by the server which will be sent to the client.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public class MainHandler extends AbstractHandler{

  @Override
  public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response)
      throws IOException, ServletException {
        System.out.println(&quot;target: &quot; + target);
        System.out.println(&quot;baseRequest: &quot; + baseRequest);
        System.out.println(&quot;request: &quot; + request);
        System.out.println(&quot;response: &quot; + response);     
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output of the above code would be something as follows on triggering with &lt;code&gt;curl&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; curl -X POST &quot;http://localhost:9120/hi&quot; -d &quot;p=value&quot;

target: /hi
baseRequest: Request(POST http://localhost:9120/hi)@57ffc3ca
request: Request(POST http://localhost:9120/hi)@57ffc3ca
response: HTTP/1.1 200
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, you can see the target is &lt;code&gt;/hi&lt;/code&gt;, the request shows the method and a reference to the request object, and the response is just &lt;code&gt;200&lt;/code&gt;, which denotes success.
We can use this information to create responses based on different inputs.&lt;/p&gt;
&lt;h4&gt;Manage different methods&lt;/h4&gt;
&lt;p&gt;We can get the request method from the request object by calling the &lt;code&gt;getMethod&lt;/code&gt; function.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public void handle(...) {
    if (&quot;POST&quot;.equalsIgnoreCase(baseRequest.getMethod())) {
      System.out.println(&quot;POST request received&quot;);
    }
    else if (&quot;GET&quot;.equalsIgnoreCase(baseRequest.getMethod())) {
      System.out.println(&quot;GET request received&quot;);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, calling curl as above will print &lt;code&gt;POST request received&lt;/code&gt; on the server side.&lt;/p&gt;
&lt;h4&gt;Extract the query parameters&lt;/h4&gt;
&lt;p&gt;The query parameters and their values can be extracted from the request object using the &lt;code&gt;getParameterNames&lt;/code&gt; method.
Here, I&apos;m parsing the values and storing them in a &lt;code&gt;Map&lt;/code&gt; to access them later.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;if (&quot;POST&quot;.equalsIgnoreCase(baseRequest.getMethod())) {
  System.out.println(&quot;POST request received&quot;);
  Map&amp;lt;String, String&amp;gt; queryParams = new HashMap&amp;lt;&amp;gt;();

  for (Enumeration&amp;lt;String&amp;gt; e = request.getParameterNames(); e.hasMoreElements();) {
    String name = e.nextElement();
    String[] values = request.getParameterValues(name);
    for (String value : values) {
      queryParams.put(name, value);
    }
  }

  System.out.println(&quot;Query params: &quot; + queryParams);
  ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, you can see the below response on the server side.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;POST request received
Query params: {p=value}
&lt;/code&gt;&lt;/pre&gt;
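&lt;p&gt;Note that the loop above keeps only the last value when a parameter name is repeated. If you need every value, collect them into a &lt;code&gt;Map&amp;lt;String, List&amp;lt;String&amp;gt;&amp;gt;&lt;/code&gt; instead. A standalone sketch (the parameter data is hard-coded here to stand in for &lt;code&gt;request.getParameterValues&lt;/code&gt;):&lt;/p&gt;

```java
import java.util.*;

public class MultiValueParams {
    public static void main(String[] args) {
        // Stand-in for the servlet parameter map: "p" is a repeated parameter.
        Map<String, String[]> raw = new LinkedHashMap<>();
        raw.put("p", new String[]{"value1", "value2"});
        raw.put("q", new String[]{"single"});

        // Collect every value instead of overwriting on duplicate names.
        Map<String, List<String>> queryParams = new LinkedHashMap<>();
        for (Map.Entry<String, String[]> e : raw.entrySet()) {
            queryParams.computeIfAbsent(e.getKey(), k -> new ArrayList<>())
                       .addAll(Arrays.asList(e.getValue()));
        }
        System.out.println(queryParams); // {p=[value1, value2], q=[single]}
    }
}
```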
&lt;h4&gt;Generating the result&lt;/h4&gt;
&lt;p&gt;The last part is generating the result for specific triggers.
This is done by writing to the response via the &lt;code&gt;getWriter&lt;/code&gt; method.
You can also set the status of the response using the &lt;code&gt;setStatus&lt;/code&gt; method.
You can add more &lt;code&gt;if&lt;/code&gt; statements, one for each endpoint.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;try {
  if (target.startsWith(&quot;/hi&quot;)) {
    response.setStatus(HttpServletResponse.SC_OK);
    response.getWriter().println(&quot;Hello World&quot;);
    baseRequest.setHandled(true);
  }
} catch (Exception e) {
  e.printStackTrace();
}
&lt;/code&gt;&lt;/pre&gt;
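&lt;p&gt;As the number of endpoints grows, chained &lt;code&gt;if&lt;/code&gt; statements get unwieldy. One common alternative is a map from path to a response supplier; the standalone sketch below (the routes and bodies are illustrative, not from the project) shows the idea.&lt;/p&gt;

```java
import java.util.*;
import java.util.function.Supplier;

public class RouteSketch {
    public static void main(String[] args) {
        // Map each endpoint to the body it should produce.
        Map<String, Supplier<String>> routes = new HashMap<>();
        routes.put("/hi", () -> "Hello World");
        routes.put("/bye", () -> "Goodbye");

        // Look up the handler for the requested target; fall back to 404-style text.
        String target = "/hi";
        Supplier<String> handler = routes.get(target);
        String body = (handler != null) ? handler.get() : "Not Found";
        System.out.println(body); // Hello World
    }
}
```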
&lt;h4&gt;Final handle method&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;  public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response)
      throws IOException, ServletException {
    if (&quot;POST&quot;.equalsIgnoreCase(baseRequest.getMethod())) {
      System.out.println(&quot;POST request received&quot;);
      Map&amp;lt;String, String&amp;gt; queryParams = new HashMap&amp;lt;&amp;gt;();

      for (Enumeration&amp;lt;String&amp;gt; e = request.getParameterNames(); e.hasMoreElements();) {
        String name = e.nextElement();
        String[] values = request.getParameterValues(name);
        for (String value : values) {
          queryParams.put(name, value);
        }
      }

      System.out.println(&quot;Query params: &quot; + queryParams);

      System.out.println(&quot;Target: &quot; +target.startsWith(&quot;/hi&quot;));

      try {
        if (target.startsWith(&quot;/hi&quot;)) {
          System.out.println(&quot;Target: &quot; +target);
          response.setStatus(HttpServletResponse.SC_OK);
          response.getWriter().println(&quot;Hello World&quot;);
          baseRequest.setHandled(true);
        }
      }
      catch (Exception e) {
        e.printStackTrace();
      }

    } else if (&quot;GET&quot;.equalsIgnoreCase(baseRequest.getMethod())) {
      System.out.println(&quot;GET request received&quot;);
    }
  }

&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Connect Handler and MainServer&lt;/h4&gt;
&lt;p&gt;Before testing your new server, you&apos;ll also need to add the handler which can be done using the server&apos;s &lt;code&gt;setHandler&lt;/code&gt; method.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;server.setHandler(new MainHandler());
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now calling the &lt;code&gt;hi&lt;/code&gt; endpoint will return &lt;code&gt;Hello World&lt;/code&gt; as a response.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; curl -X POST &quot;http://localhost:9120/hi&quot; -d &quot;p=value&quot;

Hello World
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this post, we saw how to configure &lt;code&gt;Jetty&lt;/code&gt; server and set handlers to generate responses for a particular endpoint trigger.
This looks like a long post but it revolves around just creating a &apos;Hello World&apos; server using &lt;code&gt;Jetty&lt;/code&gt;.
You can find the code for the above &lt;a href=&quot;https://github.com/UnresolvedCold/POC-Jetty-Server-Handlers&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Communication b/w Java (Maven) and Erlang (rebar3) using Jinterface</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-08-22-jinterface-erlang/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-08-22-jinterface-erlang/</guid><description>Jinterface is a way to make Java programs behave like Erlang nodes. This ensures seamless communication between Erlang and Java programs.</description><pubDate>Sat, 26 Aug 2023 09:38:06 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;Jinterface&lt;/code&gt; is a way to make &lt;code&gt;Java&lt;/code&gt; programs behave like &lt;code&gt;Erlang&lt;/code&gt; nodes.
This ensures seamless communication between &lt;code&gt;Erlang&lt;/code&gt; and &lt;code&gt;Java&lt;/code&gt; programs.
&lt;code&gt;Jinterface&lt;/code&gt; exposes some APIs that you can use to create &lt;code&gt;Erlang&lt;/code&gt; data structures and send them to &lt;code&gt;Erlang&lt;/code&gt; nodes.&lt;/p&gt;
&lt;h2&gt;How to use Jinterface&lt;/h2&gt;
&lt;h3&gt;Installation&lt;/h3&gt;
&lt;p&gt;For using &lt;code&gt;Jinterface&lt;/code&gt; you need a &lt;code&gt;jar&lt;/code&gt; file provided by &lt;code&gt;Erlang&lt;/code&gt; called &lt;code&gt;OtpErlang.jar&lt;/code&gt;.
You can find the location of the &lt;code&gt;jar&lt;/code&gt; by opening &lt;code&gt;erlang&lt;/code&gt; shell and typing the below command.
This will provide you with the location of the jar file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; code:priv_dir(jinterface).
&quot;/usr/local/lib/erlang/lib/jinterface-1.13.2/priv&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;$ ls &quot;/usr/local/lib/erlang/lib/jinterface-1.13.2/priv&quot;
OtpErlang.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It may happen that the file isn&apos;t there. This is because &lt;code&gt;Jinterface&lt;/code&gt; is not installed by default.
In this case, you will need to install &lt;code&gt;Erlang&lt;/code&gt; from the source with &lt;code&gt;Java 8+&lt;/code&gt; installed on the path.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you have installed &lt;code&gt;Erlang&lt;/code&gt; using &lt;code&gt;homebrew&lt;/code&gt; on &lt;em&gt;macOS&lt;/em&gt; then it might not be there.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So first make sure, you have &lt;code&gt;Java&lt;/code&gt; runtime installed. If not, then please install this first.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ java -version
openjdk version &quot;20.0.1&quot; 2023-04-18
OpenJDK Runtime Environment (build 20.0.1+9-29)
OpenJDK 64-Bit Server VM (build 20.0.1+9-29, mixed mode, sharing)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Remove your old installation of &lt;code&gt;Erlang&lt;/code&gt; if you have any.
Then clone the &lt;code&gt;Erlang&lt;/code&gt; repository and build it from the source.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git clone https://github.com/erlang/otp.git
$ cd otp
$ git checkout maint-25
$ ./configure &amp;amp;&amp;amp; make &amp;amp;&amp;amp; make install
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Also, verify if the jar has been installed correctly as we did earlier.&lt;/p&gt;
&lt;h3&gt;Creating a Java node&lt;/h3&gt;
&lt;h4&gt;New Maven Project&lt;/h4&gt;
&lt;p&gt;Let&apos;s first create a new &lt;code&gt;maven&lt;/code&gt; project.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mvn archetype:generate -DgroupId=com.example \
    -DartifactId=myproject \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DinteractiveMode=false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And you might also want to update the &lt;code&gt;Java&lt;/code&gt; version and &lt;code&gt;encoding&lt;/code&gt; by adding/updating the below properties in &lt;code&gt;pom.xml&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;properties&amp;gt;
    &amp;lt;maven.compiler.source&amp;gt;20&amp;lt;/maven.compiler.source&amp;gt;
    &amp;lt;maven.compiler.target&amp;gt;20&amp;lt;/maven.compiler.target&amp;gt;
        &amp;lt;project.build.sourceEncoding&amp;gt;UTF-8&amp;lt;/project.build.sourceEncoding&amp;gt;
    &amp;lt;project.reporting.outputEncoding&amp;gt;UTF-8&amp;lt;/project.reporting.outputEncoding&amp;gt;
&amp;lt;/properties&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Adding Jinterface dependency&lt;/h4&gt;
&lt;p&gt;The easiest way is to install the &lt;code&gt;jar&lt;/code&gt; into your local Maven repository (&lt;code&gt;~/.m2&lt;/code&gt;) and add it as a dependency.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mvn install:install-file \
   -Dfile=/usr/local/lib/erlang/lib/jinterface-1.13.2/priv/OtpErlang.jar \
   -DgroupId=com.ericsson.otp \
   -DartifactId=erlang \
   -Dversion=1.13.2 \
   -Dpackaging=jar \
   -DgeneratePom=true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And then add the dependency in &lt;code&gt;pom.xml&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.ericsson.otp&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;erlang&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.13.2&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Creating a Java node&lt;/h4&gt;
&lt;p&gt;This is the fun part.
Each &lt;code&gt;Java&lt;/code&gt; app will contain a node and a mailbox.
We can send the message to the node on a particular mailbox and the node will receive the message and process it exactly like &lt;code&gt;Erlang&lt;/code&gt; nodes.&lt;/p&gt;
&lt;p&gt;But first, import the required packages.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import com.ericsson.otp.erlang.*;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s create a node called, &apos;java_node&apos; and a mailbox called, &apos;java_mailbox&apos;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;OtpNode node = new OtpNode(&quot;java_node&quot;);
OtpMbox mbox = node.createMbox(&quot;java_mailbox&quot;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, let&apos;s say we will be sending the node a tuple containing the &lt;code&gt;Pid&lt;/code&gt; of the Erlang node and a message atom called &lt;code&gt;hello&lt;/code&gt;.
On receiving &lt;code&gt;{Pid, hello}&lt;/code&gt; as the message, we will send &apos;world&apos; as a message to the calling node.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;mbox.receive()&lt;/code&gt; function will pause the execution of the program until it receives a message.
We are typecasting the received message to &lt;code&gt;OtpErlangTuple&lt;/code&gt; because we know that the message will be a tuple and let&apos;s extract the &lt;code&gt;Pid&lt;/code&gt; and the message atom from the tuple.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Get the message
OtpErlangTuple erlTuple = (OtpErlangTuple) mbox.receive();

// Parse the message
OtpErlangPid fromPid = (OtpErlangPid) erlTuple.elementAt(0);
OtpErlangAtom atom = (OtpErlangAtom) erlTuple.elementAt(1);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we will check if the message is &lt;code&gt;hello&lt;/code&gt; and if that is the case, we will send &lt;code&gt;world&lt;/code&gt; as a message to the calling node.
Messages can be sent using the &lt;code&gt;mbox.send()&lt;/code&gt; function.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;if (atom.atomValue().equals(&quot;hello&quot;)) {
    // Create the reply message
    OtpErlangAtom replyAtom = new OtpErlangAtom(&quot;world&quot;);
    OtpErlangObject[] replyElements = {new OtpErlangAtom(&quot;ok&quot;), replyAtom};
    OtpErlangTuple replyTuple = new OtpErlangTuple(replyElements);
    
    // This will send the message to the calling node with Pid as fromPid
    mbox.send(fromPid, replyTuple);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Alive function&lt;/h4&gt;
&lt;p&gt;You might want a health-checker function that confirms the node is up.
This is done by creating an &lt;code&gt;isAlive&lt;/code&gt; function, which comes in handy while debugging and as a sanity check in code.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public boolean isAlive() {
    return true;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Final Java code&lt;/h4&gt;
&lt;p&gt;Your final &lt;code&gt;Java&lt;/code&gt; code will look something like this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;package com.example;

import com.ericsson.otp.erlang.*;

public class HelloWorld {
    public boolean isAlive() {
        return true;
    }

    public static void main(String[] args) throws Exception {
         OtpNode node = new OtpNode(&quot;java_node&quot;);
         OtpMbox mbox = node.createMbox(&quot;java_mailbox&quot;);
         System.out.println(&quot;Node Created. Now, you can communicate with this node.&quot;);
         OtpErlangTuple erlTuple = (OtpErlangTuple) mbox.receive();
         OtpErlangPid fromPid = (OtpErlangPid) erlTuple.elementAt(0);
         OtpErlangAtom atom = (OtpErlangAtom) erlTuple.elementAt(1);
         if (atom.atomValue().equals(&quot;hello&quot;)) {
             OtpErlangAtom replyAtom = new OtpErlangAtom(&quot;world&quot;);
             OtpErlangObject[] replyElements = {new OtpErlangAtom(&quot;ok&quot;), replyAtom};
             OtpErlangTuple replyTuple = new OtpErlangTuple(replyElements);
             mbox.send(fromPid, replyTuple);
         }
    }
}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can verify that your &lt;code&gt;Java&lt;/code&gt; app is registered as a node using the commands below in the &lt;code&gt;Erlang&lt;/code&gt; shell.
You can also &lt;code&gt;ping&lt;/code&gt; the node (&lt;code&gt;OtpNode&lt;/code&gt; answers pings automatically).
A return value of &lt;code&gt;pong&lt;/code&gt; means success and &lt;code&gt;pang&lt;/code&gt; means failure.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ erl -sname client
&amp;gt; net_adm:names().
{ok,[{&quot;java_node&quot;,59873}]}

&amp;gt; net_adm:ping(java_node@GGN002262).       
pong
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Erlang Node&lt;/h3&gt;
&lt;h4&gt;Erlang program&lt;/h4&gt;
&lt;p&gt;On the &lt;code&gt;Erlang&lt;/code&gt; side we can just send the message to the &lt;code&gt;java_node&lt;/code&gt; and wait for a message.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;GGN002262&lt;/code&gt; part of my node is my hostname. Use the &lt;code&gt;hostname&lt;/code&gt; command to get yours.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code&gt;-module(client).
-export([start/0]).

start() -&amp;gt;
  {java_mailbox, &apos;java_node@GGN002262&apos;} ! {self(), hello},
  receive

  {ok, Res} -&amp;gt;
     io:format(&quot;Java says: ~p~n&quot;, [Res])
  end.
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Run a distributed service&lt;/h4&gt;
&lt;p&gt;You will need to start the &lt;code&gt;Erlang&lt;/code&gt; runtime as a distributed service.
This can be done using &lt;code&gt;-sname&lt;/code&gt; flag.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ erl -sname client
1&amp;gt; c(client).
2&amp;gt; client:start().
Java says: world
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Starting EPMD&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;epmd&lt;/code&gt; program comes packaged with the &lt;code&gt;Erlang&lt;/code&gt; distribution.
It is the Erlang Port Mapper Daemon and is required for registering nodes.&lt;/p&gt;
&lt;p&gt;If &lt;code&gt;epmd&lt;/code&gt; is not running then you will get an error something like this, &lt;code&gt;Nameserver not responding on GGN002262 when publishing java_node&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To start the &lt;code&gt;epmd&lt;/code&gt; server, you just need to run &lt;code&gt;epmd&lt;/code&gt; from your shell.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Start the epmd server in the background
$ epmd &amp;amp;

# Run the Java app
$ java -jar my-app.jar

# Verify that the app is registered
$ epmd -names
epmd: up and running on port 4369 with data:
name java_node at port 59873
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If your &lt;code&gt;Java&lt;/code&gt; app is registered, you will get the above output.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Here, we discussed how to use &lt;code&gt;Jinterface&lt;/code&gt; to communicate with &lt;code&gt;Erlang&lt;/code&gt; using &lt;code&gt;Java&lt;/code&gt;.
We saw that &lt;code&gt;Java&lt;/code&gt; programs can behave as &lt;code&gt;Erlang&lt;/code&gt; nodes and can communicate with each other through message passing.
We also learned about &lt;code&gt;epmd&lt;/code&gt;, the Erlang Port Mapper Daemon.&lt;/p&gt;
&lt;p&gt;One thing to consider while using &lt;code&gt;Jinterface&lt;/code&gt; is it simulates the &lt;code&gt;Java&lt;/code&gt; node as an &lt;code&gt;Erlang&lt;/code&gt; node, which makes the communication a little bit slow hence this approach should not be taken for a system that requires high-frequency message transfer.&lt;/p&gt;
&lt;p&gt;It is also important to note that, by default, this message passing is &lt;code&gt;async&lt;/code&gt;: the sender does not wait for &lt;code&gt;Java&lt;/code&gt; to process the message and reply.
It can be made &lt;code&gt;synchronous&lt;/code&gt; by adding a mechanism where the caller blocks until a confirmation message arrives from the receiving side.&lt;/p&gt;
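&lt;p&gt;The confirmation mechanism described above is essentially a blocking request-reply. The standalone sketch below (plain Java, no &lt;code&gt;Jinterface&lt;/code&gt; types; queue names are illustrative) shows the pattern: the caller sends a message and then blocks on a reply queue with a timeout.&lt;/p&gt;

```java
import java.util.concurrent.*;

public class RequestReplySketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> replies = new LinkedBlockingQueue<>();

        // Stand-in for the Java node: waits for a request and replies.
        Thread node = new Thread(() -> {
            try {
                String msg = requests.take();
                if (msg.equals("hello")) {
                    replies.put("world");
                }
            } catch (InterruptedException ignored) { }
        });
        node.start();

        // Stand-in for the Erlang side: send hello, then block until the
        // reply arrives, which makes the exchange effectively synchronous.
        requests.put("hello");
        String reply = replies.poll(5, TimeUnit.SECONDS);
        System.out.println("reply=" + reply); // reply=world
        node.join();
    }
}
```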
&lt;p&gt;Overall, this is a great add-on to &lt;code&gt;Erlang&lt;/code&gt; that pumps the power of &lt;code&gt;Java&lt;/code&gt; into &lt;code&gt;Erlang&apos;s&lt;/code&gt; world.&lt;/p&gt;
</content:encoded></item><item><title>Docker Desktop vs Colima on Mac M1 for working with VSCode containers</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-08-18-analysis-docker-dsktop-colima-on-mac-m1/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-08-18-analysis-docker-dsktop-colima-on-mac-m1/</guid><description>A read/write speed comparison between Docker Desktop and Colima on Mac M1 for developing in devcontainers and VSCode.</description><pubDate>Fri, 18 Aug 2023 16:01:22 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I am assuming that you have a basic understanding of &lt;code&gt;docker&lt;/code&gt; and &lt;code&gt;devcontainers&lt;/code&gt; and have used them before.
As this post revolves around Mac M1, I&apos;d suggest not using this analysis for comparing other systems in general.&lt;/p&gt;
&lt;p&gt;VSCode &lt;code&gt;devcontainers&lt;/code&gt; are the new way of starting a project development environment.
They are a great way to get started with a project without having to install all the dependencies on your local machine.
This reduces the setup time and allows you to get started with the project right away.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Devcontainers&lt;/code&gt; are powered by Docker and VSCode.
VSCode provides the UI and Docker provides the containerization.
This means that you need to have Docker installed on your machine to use &lt;code&gt;devcontainers&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;While &lt;code&gt;docker&lt;/code&gt; is the main engine,
&lt;code&gt;Docker Desktop&lt;/code&gt; and &lt;code&gt;Colima&lt;/code&gt; are the two main options for creating a &lt;code&gt;docker&lt;/code&gt; environment on Mac.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Colima&lt;/code&gt; is an open-source alternative to &lt;code&gt;Docker Desktop&lt;/code&gt;.
While both are free to use, &lt;code&gt;Docker Desktop&lt;/code&gt; requires companies with more than 250 employees to pay for a license.
&lt;code&gt;Docker Desktop&lt;/code&gt; can also be enhanced with lots of extensions, which Colima lacks.&lt;/p&gt;
&lt;p&gt;I am using &lt;code&gt;docker&lt;/code&gt; for mainly 2 things, creating production-level containers and using &lt;code&gt;devcontainers&lt;/code&gt; for development.
So a lot of &lt;code&gt;Docker Desktop&lt;/code&gt; extensions are not useful for me but the performance is.&lt;/p&gt;
&lt;p&gt;I analyzed the read/write performance of &lt;code&gt;Docker Desktop&lt;/code&gt; and &lt;code&gt;Colima&lt;/code&gt; for working with &lt;code&gt;devcontainers&lt;/code&gt; and here are the results.
I also compared the build time for my blog on both the software.&lt;/p&gt;
&lt;h2&gt;System Information&lt;/h2&gt;
&lt;p&gt;I am using a Macbook Air M1 with 16 GB RAM.
I have allocated 8 GB RAM and 4 CPUs to Docker Desktop and Colima each.
I also kept the disk storage to 60 GB for both the software which would be enough for our testing.&lt;/p&gt;
&lt;h3&gt;Docker Desktop&lt;/h3&gt;
&lt;p&gt;I&apos;m using Docker Desktop &lt;code&gt;v24.2&lt;/code&gt;, which claims a &lt;code&gt;60%&lt;/code&gt; improvement in read/write performance.
It uses Apple&apos;s &lt;code&gt;VZ&lt;/code&gt; framework for virtualization, which is reported to be far better optimized than &lt;code&gt;Qemu&lt;/code&gt;.
I have no extensions installed on Docker Desktop.
And no other container was running at the time of analysis.&lt;/p&gt;
&lt;h3&gt;Colima&lt;/h3&gt;
&lt;p&gt;The latest Colima release at the time of writing is &lt;code&gt;v0.5.5&lt;/code&gt;, from May 2023.
For the comparison, I&apos;m using the &lt;code&gt;HEAD&lt;/code&gt; of the &lt;a href=&quot;https://github.com/abiosoft/colima&quot;&gt;Colima&lt;/a&gt; repository to capture any latest improvements.&lt;/p&gt;
&lt;p&gt;I tested Colima with both &lt;code&gt;Qemu&lt;/code&gt; and &lt;code&gt;VZ&lt;/code&gt; virtualization.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Using the &lt;code&gt;HEAD&lt;/code&gt; can be unstable compared to the release branches.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Analysis Procedure&lt;/h2&gt;
&lt;p&gt;I created a simple &lt;code&gt;Java&lt;/code&gt; app for writing files of size &lt;code&gt;1 MB&lt;/code&gt; to &lt;code&gt;1 GB&lt;/code&gt; and calculated the time it takes to write each of the files.
File sizes are &lt;code&gt;1 MB&lt;/code&gt;, &lt;code&gt;10 MB&lt;/code&gt;, &lt;code&gt;64 MB&lt;/code&gt;, &lt;code&gt;128 MB&lt;/code&gt;, &lt;code&gt;256 MB&lt;/code&gt;, &lt;code&gt;512 MB&lt;/code&gt;, and &lt;code&gt;1024 MB&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I created an arbitrary &lt;code&gt;1 GB&lt;/code&gt; file and accessed the random positions &lt;code&gt;1,000&lt;/code&gt; to &lt;code&gt;1,000,000&lt;/code&gt; times and calculated the average.&lt;/p&gt;
&lt;p&gt;I also calculated the time it takes to print lines on the console using &lt;code&gt;System.out&lt;/code&gt;.
I printed &lt;code&gt;10,000&lt;/code&gt;, &lt;code&gt;100,000&lt;/code&gt;, and &lt;code&gt;1,000,000&lt;/code&gt; lines of &lt;code&gt;1&lt;/code&gt; to &lt;code&gt;1000&lt;/code&gt; characters each and calculated the average time.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can find the Java app &lt;a href=&quot;https://github.com/UnresolvedCold/poc-performance-devcontainer&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
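&lt;p&gt;For reference, the core of the write benchmark boils down to timing a buffered file write. The simplified standalone sketch below (1 MB only, written to a temp file) shows the shape of the measurement, not the actual benchmark code linked above.&lt;/p&gt;

```java
import java.io.IOException;
import java.nio.file.*;

public class WriteBenchSketch {
    public static void main(String[] args) throws IOException {
        // Write a 1 MB buffer to a temp file and time it; the real benchmark
        // repeats this for sizes from 1 MB up to 1 GB.
        byte[] data = new byte[1024 * 1024];
        Path file = Files.createTempFile("bench", ".bin");

        long start = System.nanoTime();
        Files.write(file, data);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("wrote " + Files.size(file) + " bytes in " + elapsedMs + " ms");
        Files.delete(file);
    }
}
```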
&lt;p&gt;My blog is made on &lt;code&gt;AstroJs&lt;/code&gt; which generates a static site on build.
The build process involves compressing the images, minifying the CSS and JS, MDX to MD conversion, HTML conversion and so on.
This uses a lot of read/write operations hence I think it would be a good test of the performance.
So I calculated the build time and the first render time of the blog on both systems to get a real feel of the performance.&lt;/p&gt;
&lt;h3&gt;Changing between Docker Desktop and Colima&lt;/h3&gt;
&lt;p&gt;I made sure to shut down Colima before starting Docker Desktop and vice versa.
Docker Desktop can be switched on/off from the UI.&lt;/p&gt;
&lt;h3&gt;Commands Used&lt;/h3&gt;
&lt;h4&gt;Colima stop and delete settings&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;colima stop
colima delete
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Start Colima&lt;/h4&gt;
&lt;p&gt;To start Qemu mode&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;colima start --cpu 4 --memory 8 --arch aarch64 --vm-type qemu
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To start VZ mode&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;colima start --cpu 4 --memory 8 --arch aarch64 --vm-type=vz --vz-rosetta
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Results&lt;/h2&gt;
&lt;h3&gt;Write Performance&lt;/h3&gt;
&lt;h4&gt;Docker Desktop&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File size&lt;/th&gt;
&lt;th&gt;Duration (Worst)&lt;/th&gt;
&lt;th&gt;Duration (Best)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1 MB&lt;/td&gt;
&lt;td&gt;311 ms&lt;/td&gt;
&lt;td&gt;227 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10 MB&lt;/td&gt;
&lt;td&gt;1998 ms (2s)&lt;/td&gt;
&lt;td&gt;1943 ms (2 s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64 MB&lt;/td&gt;
&lt;td&gt;14433 ms (14 s)&lt;/td&gt;
&lt;td&gt;12016 ms (12 s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128 MB&lt;/td&gt;
&lt;td&gt;24745 ms (24 s)&lt;/td&gt;
&lt;td&gt;23455 ms (23 s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256 MB&lt;/td&gt;
&lt;td&gt;58937 ms (1 min)&lt;/td&gt;
&lt;td&gt;52252 ms (52 s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;512 MB&lt;/td&gt;
&lt;td&gt;114213 ms (1.9 min)&lt;/td&gt;
&lt;td&gt;110060 ms (1.8 min)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024 MB&lt;/td&gt;
&lt;td&gt;262955 ms (4.38 min)&lt;/td&gt;
&lt;td&gt;194817 ms (3.2 min)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Colima (Qemu)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File Size&lt;/th&gt;
&lt;th&gt;Duration (Worst)&lt;/th&gt;
&lt;th&gt;Duration (Best)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1 MB&lt;/td&gt;
&lt;td&gt;573 ms&lt;/td&gt;
&lt;td&gt;286 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10 MB&lt;/td&gt;
&lt;td&gt;3077 ms (3 s)&lt;/td&gt;
&lt;td&gt;2345 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64 MB&lt;/td&gt;
&lt;td&gt;19087 ms (19 s)&lt;/td&gt;
&lt;td&gt;14116 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128 MB&lt;/td&gt;
&lt;td&gt;38191 ms (38 s)&lt;/td&gt;
&lt;td&gt;26096 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256 MB&lt;/td&gt;
&lt;td&gt;81071 ms (1.35 min)&lt;/td&gt;
&lt;td&gt;67062 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;512 MB&lt;/td&gt;
&lt;td&gt;151242 ms (2.52 min)&lt;/td&gt;
&lt;td&gt;159663 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024 MB&lt;/td&gt;
&lt;td&gt;293370 ms (4.89 min)&lt;/td&gt;
&lt;td&gt;301629 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Colima (VZ + Rosetta 2)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File size&lt;/th&gt;
&lt;th&gt;Duration (worst)&lt;/th&gt;
&lt;th&gt;Duration (best)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1 MB&lt;/td&gt;
&lt;td&gt;291 ms&lt;/td&gt;
&lt;td&gt;236 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10 MB&lt;/td&gt;
&lt;td&gt;2113 ms (2 s)&lt;/td&gt;
&lt;td&gt;2199 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64 MB&lt;/td&gt;
&lt;td&gt;12453 ms (12 s)&lt;/td&gt;
&lt;td&gt;12262 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128 MB&lt;/td&gt;
&lt;td&gt;25315 ms (25 s)&lt;/td&gt;
&lt;td&gt;24603 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256 MB&lt;/td&gt;
&lt;td&gt;49837 ms (49 s)&lt;/td&gt;
&lt;td&gt;50475 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;512 MB&lt;/td&gt;
&lt;td&gt;101883 ms (1.69 min)&lt;/td&gt;
&lt;td&gt;100692 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024 MB&lt;/td&gt;
&lt;td&gt;198126 ms (3.30 min)&lt;/td&gt;
&lt;td&gt;200163 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Read Performance&lt;/h3&gt;
&lt;p&gt;Shockingly, the read performance of Docker Desktop is far worse than Colima&apos;s, peaking at only &lt;code&gt;13 reads/ms&lt;/code&gt;.
Colima with &lt;code&gt;Qemu&lt;/code&gt; peaks at &lt;code&gt;729 reads/ms&lt;/code&gt; and Colima with &lt;code&gt;VZ&lt;/code&gt; at &lt;code&gt;705 reads/ms&lt;/code&gt;,
which are almost comparable.&lt;/p&gt;
&lt;p&gt;Surprisingly, the read performance of &lt;code&gt;Qemu&lt;/code&gt; is slightly better than that of &lt;code&gt;VZ&lt;/code&gt;.&lt;/p&gt;
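&lt;p&gt;A random-read test of this kind can be sketched in Python as follows (my own minimal reconstruction; the real benchmark may differ):&lt;/p&gt;

```python
# Illustrative sketch of a random-read benchmark; not the exact script used.
import os
import random
import tempfile
import time

def timed_random_reads(path, n_reads, read_size=1):
    """Perform n_reads random reads of read_size bytes and return reads/ms."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(n_reads):
            f.seek(random.randrange(size - read_size))
            f.read(read_size)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return n_reads / elapsed_ms

# Build a small test file and measure
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1024 * 1024))  # 1 MB of random data
speed = timed_random_reads(tmp.name, 10_000)
print(f"{speed:.0f} reads/ms")
os.remove(tmp.name)
```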
&lt;h4&gt;Docker Desktop&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;number of random reads&lt;/th&gt;
&lt;th&gt;Duration (Total)&lt;/th&gt;
&lt;th&gt;Average Speed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;td&gt;80593 - 81941 ms&lt;/td&gt;
&lt;td&gt;12 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500,000&lt;/td&gt;
&lt;td&gt;38238 - 39058 ms&lt;/td&gt;
&lt;td&gt;12 - 13 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200,000&lt;/td&gt;
&lt;td&gt;15299 - 15368 ms&lt;/td&gt;
&lt;td&gt;12 - 13 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;7755 - 7895 ms&lt;/td&gt;
&lt;td&gt;12 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;726 - 876 ms&lt;/td&gt;
&lt;td&gt;11 - 12 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;81 - 107 ms&lt;/td&gt;
&lt;td&gt;9 - 12 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;I ran this 5 times and even restarted the Docker engine multiple times, but the results were the same.
If anyone knows why this is happening, please let me know.&lt;/p&gt;
&lt;h4&gt;Colima (Qemu)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;number of random reads&lt;/th&gt;
&lt;th&gt;Duration (Total)&lt;/th&gt;
&lt;th&gt;Average Speed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;td&gt;1415 - 1419 ms&lt;/td&gt;
&lt;td&gt;704 - 707 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500,000&lt;/td&gt;
&lt;td&gt;686 - 695 ms&lt;/td&gt;
&lt;td&gt;696 - 719 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200,000&lt;/td&gt;
&lt;td&gt;276 - 278 ms&lt;/td&gt;
&lt;td&gt;719 - 728 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;137 - 141 ms&lt;/td&gt;
&lt;td&gt;709 - 724 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;15 - 17 ms&lt;/td&gt;
&lt;td&gt;666 - 729 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;2 - 3 ms&lt;/td&gt;
&lt;td&gt;333 - 500 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Colima (VZ + Rosetta 2)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;number of random reads&lt;/th&gt;
&lt;th&gt;Duration (Total)&lt;/th&gt;
&lt;th&gt;Average Speed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;td&gt;1493 - 1507 ms&lt;/td&gt;
&lt;td&gt;663 - 669 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500,000&lt;/td&gt;
&lt;td&gt;709 - 721 ms&lt;/td&gt;
&lt;td&gt;693 - 705 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200,000&lt;/td&gt;
&lt;td&gt;284 - 286 ms&lt;/td&gt;
&lt;td&gt;699 - 704 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;143 ms&lt;/td&gt;
&lt;td&gt;699 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;15 - 16 ms&lt;/td&gt;
&lt;td&gt;625 - 666 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;1 -2 ms&lt;/td&gt;
&lt;td&gt;500 - 1000 reads/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Print Performance&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;Docker Desktop&lt;/code&gt; has the best print performance of the three, with a peak of &lt;code&gt;4074 chars/ms&lt;/code&gt;.
&lt;code&gt;Colima&lt;/code&gt; with &lt;code&gt;VZ&lt;/code&gt; comes second with a peak of &lt;code&gt;2984 chars/ms&lt;/code&gt;.&lt;/p&gt;
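&lt;p&gt;Similarly, a print-throughput test can be sketched as follows (again an illustrative reconstruction, not the exact script used):&lt;/p&gt;

```python
# Illustrative sketch of a print-throughput benchmark; not the exact script used.
import time

def timed_print(n_lines, line="x" * 100):
    """Print n_lines lines and return (lines/ms, chars/ms)."""
    start = time.perf_counter()
    for _ in range(n_lines):
        print(line)
    elapsed_ms = (time.perf_counter() - start) * 1000
    chars = n_lines * (len(line) + 1)  # +1 for the newline
    return n_lines / elapsed_ms, chars / elapsed_ms

lines_per_ms, chars_per_ms = timed_print(1_000)
```

Inside a container, this measures how fast the runtime can shuttle stdout from the container to the host terminal.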
&lt;h4&gt;Docker Desktop&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;number of lines to print&lt;/th&gt;
&lt;th&gt;number of chars printed&lt;/th&gt;
&lt;th&gt;total time&lt;/th&gt;
&lt;th&gt;Avg (per line)&lt;/th&gt;
&lt;th&gt;Avg (per character)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;4,997,048&lt;/td&gt;
&lt;td&gt;1346 ms&lt;/td&gt;
&lt;td&gt;7 lines/ms&lt;/td&gt;
&lt;td&gt;3712 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;49,999,881&lt;/td&gt;
&lt;td&gt;12272 ms&lt;/td&gt;
&lt;td&gt;8 lines/ms&lt;/td&gt;
&lt;td&gt;4074 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;td&gt;500,324,217&lt;/td&gt;
&lt;td&gt;122849 ms&lt;/td&gt;
&lt;td&gt;8 lines/ms&lt;/td&gt;
&lt;td&gt;4072 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Colima (Qemu)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;number of lines to print&lt;/th&gt;
&lt;th&gt;number of chars printed&lt;/th&gt;
&lt;th&gt;total time&lt;/th&gt;
&lt;th&gt;Avg (per line)&lt;/th&gt;
&lt;th&gt;Avg (per character)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;5,013,996&lt;/td&gt;
&lt;td&gt;2019 ms&lt;/td&gt;
&lt;td&gt;4 lines/ms&lt;/td&gt;
&lt;td&gt;2483 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;50,028,339&lt;/td&gt;
&lt;td&gt;19253 ms&lt;/td&gt;
&lt;td&gt;5 lines/ms&lt;/td&gt;
&lt;td&gt;2598 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;td&gt;500,593,068&lt;/td&gt;
&lt;td&gt;180024 ms&lt;/td&gt;
&lt;td&gt;5 lines/ms&lt;/td&gt;
&lt;td&gt;2780 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Colima (VZ + Rosetta 2)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;number of lines to print&lt;/th&gt;
&lt;th&gt;number of chars printed&lt;/th&gt;
&lt;th&gt;total time&lt;/th&gt;
&lt;th&gt;Avg (per line)&lt;/th&gt;
&lt;th&gt;Avg (per character)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;5,030,577&lt;/td&gt;
&lt;td&gt;1752 ms&lt;/td&gt;
&lt;td&gt;5 lines/ms&lt;/td&gt;
&lt;td&gt;2871 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;td&gt;50,010,997&lt;/td&gt;
&lt;td&gt;16755 ms&lt;/td&gt;
&lt;td&gt;5 lines/ms&lt;/td&gt;
&lt;td&gt;2984 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1,000,000&lt;/td&gt;
&lt;td&gt;499,828,875&lt;/td&gt;
&lt;td&gt;169597 ms&lt;/td&gt;
&lt;td&gt;5 lines/ms&lt;/td&gt;
&lt;td&gt;2947 chars/ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Blog Performance&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;System&lt;/th&gt;
&lt;th&gt;Build time&lt;/th&gt;
&lt;th&gt;Initial rendering time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Host&lt;/td&gt;
&lt;td&gt;673 - 711 s&lt;/td&gt;
&lt;td&gt;42.69 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Colima (Qemu)&lt;/td&gt;
&lt;td&gt;817 - 836 s&lt;/td&gt;
&lt;td&gt;6.7 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Colima (VZ + Rosetta)&lt;/td&gt;
&lt;td&gt;671 - 924 s&lt;/td&gt;
&lt;td&gt;4.8 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker Desktop&lt;/td&gt;
&lt;td&gt;824 - 828 s&lt;/td&gt;
&lt;td&gt;5.6 s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Compared to the host, the build time is almost the same for all the systems, but &lt;code&gt;Colima&lt;/code&gt; with &lt;code&gt;VZ&lt;/code&gt; is the best among them:
its fastest build is comparable to the host system.
None of them came close to the host system in terms of rendering time (as expected).&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;Docker Desktop&lt;/code&gt; can be a better option for apps that print a lot but perform few read/write operations.
When it comes to extensive read/write operations,
&lt;code&gt;Colima&lt;/code&gt; with &lt;code&gt;VZ + Rosetta 2&lt;/code&gt; would be my choice because it gives you better write and print performance than &lt;code&gt;Qemu&lt;/code&gt;.
For my blog, I will be using &lt;code&gt;Colima&lt;/code&gt; with &lt;code&gt;VZ + Rosetta 2&lt;/code&gt; because it gives me the best build time and read/write performance.&lt;/p&gt;
</content:encoded></item><item><title>Syncing your Elfeed saves between multiple systems</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-08-15-elfeed-sync-between-multiple-systems/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-08-15-elfeed-sync-between-multiple-systems/</guid><description>Elfeed is a great RSS reader for Emacs, but it doesn&apos;t have a built-in way to sync your feeds between multiple systems. Here&apos;s how I do it.</description><pubDate>Tue, 15 Aug 2023 19:16:14 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I recently started using &lt;code&gt;Emacs&lt;/code&gt; with &lt;code&gt;Doom Emacs&lt;/code&gt; as my primary text editor.
I&apos;ve been using &lt;code&gt;VSCode&lt;/code&gt; for years, but I&apos;ve always been curious about &lt;code&gt;Emacs&lt;/code&gt; and I wanted to give it a try.
I&apos;m still exploring different modules, and I&apos;m really enjoying the beginner-friendly keybindings &lt;code&gt;&amp;lt;SPC&amp;gt; h r r&lt;/code&gt; and &lt;code&gt;&amp;lt;SPC&amp;gt; q R&lt;/code&gt;.
I was in an endless loop of searching for a good RSS reader for my needs.
I tried some Android-based, web-based, and desktop-based readers, but I couldn&apos;t find one that satisfied my needs.&lt;/p&gt;
&lt;p&gt;I wanted a reader that I can switch to easily in my free time.
I wanted a reader that I can sync between my devices (which doesn&apos;t include mobile devices as I don&apos;t use them for reading).
I wanted a reader that can tag my feeds according to my customizations.
And most important of all, I wanted a simple reader that doesn&apos;t have a lot of features that I don&apos;t need.&lt;/p&gt;
&lt;p&gt;I found &lt;code&gt;Elfeed&lt;/code&gt; to be the best RSS reader for my needs.
It&apos;s simple, it&apos;s fast, it&apos;s customizable, and it&apos;s easy to switch to whenever you&apos;re free.&lt;/p&gt;
&lt;p&gt;The only thing that I didn&apos;t like about &lt;code&gt;Elfeed&lt;/code&gt; is that it doesn&apos;t have a built-in way to sync your feeds between multiple systems.
But there are some workarounds that you can use to sync your feeds.&lt;/p&gt;
&lt;h2&gt;Data storage in Elfeed&lt;/h2&gt;
&lt;p&gt;Let&apos;s first understand how Elfeed stores your data.&lt;/p&gt;
&lt;h3&gt;Data storage location&lt;/h3&gt;
&lt;p&gt;By default, &lt;code&gt;Elfeed&lt;/code&gt; stores your data inside &lt;code&gt;~/.elfeed&lt;/code&gt; directory.
You can change this location by setting the &lt;code&gt;elfeed-db-directory&lt;/code&gt; variable.&lt;/p&gt;
&lt;p&gt;I&apos;m using &lt;code&gt;Doom Emacs&lt;/code&gt;, so I set this variable inside my &lt;code&gt;config.el&lt;/code&gt; file.
You can do the same with your installation depending on what package manager you use.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;(after! elfeed
  (setq elfeed-db-directory &quot;~/.elfeed-data&quot;))
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Data storage format&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;ls -la ~/.elfeed-data
drwxrwxrwx  106 shubham.kumar  1729907015    3392 Aug 15 22:28 data
-rwxrwxrwx@   1 shubham.kumar  1729907015  924723 Aug 15 22:52 index

ls -a ~/.elfeed-data/data
00    21    37  ...  f0
06    24    3c  ...  f2
09    27    3e  ...  ff

ls ~/.elfeed-data/data/00
00e8db47f3a5b93b0fbb9b4c31748f607ae7bae5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;index&lt;/code&gt; file is a binary file that contains the metadata of your feeds.
The metadata includes the title, link, tags, etc.
This means that the &lt;code&gt;index&lt;/code&gt; file contains the data that you see when you open &lt;code&gt;Elfeed&lt;/code&gt; in &lt;code&gt;Emacs&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;data&lt;/code&gt; directory contains the actual content of your feed entries.
Each entry is stored in a file named after its hash.&lt;/p&gt;
&lt;p&gt;And just like &lt;code&gt;git&lt;/code&gt;&apos;s object storage, the &lt;code&gt;data&lt;/code&gt; directory is split into subdirectories named after the
first two characters of the hash, with the content stored in a file named after the full hash inside them.&lt;/p&gt;
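&lt;p&gt;This hash-to-path scheme can be illustrated in Python (SHA-1 matches the 40-character names in the listing above, but treat the exact algorithm as an assumption):&lt;/p&gt;

```python
# Illustration of the git-style hash-to-path layout; SHA-1 is an assumption here.
import hashlib

def content_path(data: bytes) -> str:
    """Map content to a storage path: data/{first two hex chars}/{full hash}."""
    digest = hashlib.sha1(data).hexdigest()
    return f"data/{digest[:2]}/{digest}"

print(content_path(b"hello"))
# data/aa/aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```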
&lt;blockquote&gt;
&lt;p&gt;For more detailed information about the data storage format, you can check &lt;a href=&quot;https://nullprogram.com/blog/2013/09/09/&quot;&gt;the docs by the creator, Chris Wellons&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;What to sync?&lt;/h2&gt;
&lt;p&gt;Looking at the storage format of &lt;code&gt;Elfeed&lt;/code&gt;, we can say that we can sync our
feeds by syncing the &lt;code&gt;index&lt;/code&gt; file and the &lt;code&gt;data&lt;/code&gt; directory.
But the &lt;code&gt;data&lt;/code&gt; directory will be regenerated from your feeds,
so we don&apos;t need to sync it if we are already syncing our list of blogs.&lt;/p&gt;
&lt;p&gt;Plus, comparing the size of the &lt;code&gt;org&lt;/code&gt; file (1.6K) with the &lt;code&gt;data&lt;/code&gt; directory (3.3K, not counting its contents),
the &lt;code&gt;data&lt;/code&gt; directory is the larger of the two.
So I think it&apos;s better to sync the &lt;code&gt;index&lt;/code&gt; file and the &lt;code&gt;org&lt;/code&gt; file that contains
your list of blogs than to sync the entire &lt;code&gt;elfeed-data&lt;/code&gt; directory.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ls -lh ~/.elfeed-data/
total 1808
drwxrwxrwx  106 shubham.kumar  1729907015   3.3K Aug 15 22:28 data
-rwxrwxrwx@   1 shubham.kumar  1729907015   903K Aug 15 22:52 index

ls -lh ~/Documents/org/elfeed.org
-rwxrwxrwx  1 shubham.kumar  1729907015   1.6K Aug 13 08:00 /Users/shubham.kumar/Documents/org/elfeed.org
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For me, I&apos;m using &lt;code&gt;org-mode&lt;/code&gt; to store my list of blogs that are being synced using version control.
So whenever I start using &lt;code&gt;Emacs&lt;/code&gt; on my other system, I always update the org files.
This ensures my list of blogs is always up to date.&lt;/p&gt;
&lt;p&gt;So the only thing I need now is to sync the metadata of my feeds.
And for that, I&apos;m using &lt;code&gt;Syncthing&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Syncing with Syncthing&lt;/h2&gt;
&lt;h3&gt;My current setup&lt;/h3&gt;
&lt;p&gt;I have 2 systems that I use regularly.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;My primary system is my &lt;code&gt;Linux Mint&lt;/code&gt; system which I use for personal stuff.&lt;/li&gt;
&lt;li&gt;My secondary system is my &lt;code&gt;MacBook Air M1&lt;/code&gt; which I use for work.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On both systems, I use &lt;code&gt;Doom Emacs&lt;/code&gt; and &lt;code&gt;Elfeed&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;Initial syncing (Syncthing on both systems)&lt;/h3&gt;
&lt;p&gt;(Diagram omitted: both systems and my phone run &lt;code&gt;Syncthing&lt;/code&gt; and sync the &lt;code&gt;index&lt;/code&gt; file with each other.)&lt;/p&gt;
&lt;p&gt;This ensures that whenever one of my systems is offline, the other system can sync with my phone.
And when both systems are online, they can sync with each other or my phone.
This will ensure that the &lt;code&gt;index&lt;/code&gt; file is always synced between the systems.&lt;/p&gt;
&lt;h4&gt;Problem with this approach&lt;/h4&gt;
&lt;p&gt;The problem with this approach is that I now have an unutilized copy of the &lt;code&gt;index&lt;/code&gt; file on my phone, taking up space.
I also have to make sure that my phone is always connected to the internet and is always running &lt;code&gt;Syncthing&lt;/code&gt;,
which will also impact my battery life.&lt;/p&gt;
&lt;p&gt;I have to check how much effect this has on my battery life (will update the blog after calculating this).&lt;/p&gt;
&lt;h3&gt;Best approach (Using Raspberry Pi)&lt;/h3&gt;
&lt;p&gt;I haven&apos;t implemented this yet, but I think this is the best approach.
If I get any problem with my current setup, I&apos;ll implement this.&lt;/p&gt;
&lt;p&gt;I&apos;ll buy a Raspberry Pi and install &lt;code&gt;Syncthing&lt;/code&gt; on it.
I&apos;ll add the &lt;code&gt;~/.elfeed-data&lt;/code&gt; directory to the sync list while ignoring the &lt;code&gt;data&lt;/code&gt; directory.
I&apos;ll add both systems to sync with my Raspberry Pi and with each other.
I&apos;ll ensure that my Raspberry Pi is always connected to the internet and is always running &lt;code&gt;Syncthing&lt;/code&gt;.
I&apos;ll also have to port forward my Raspberry Pi so that I can access it from outside my network.
For this, I may need to talk to my ISP about static IP and port forwarding capabilities.&lt;/p&gt;
&lt;p&gt;Will write a blog about this if I implement this.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this post, I talked about how I sync my &lt;code&gt;Elfeed&lt;/code&gt; feeds between my systems.
I talked about the data storage format of &lt;code&gt;Elfeed&lt;/code&gt; and how I sync my feeds using &lt;code&gt;Syncthing&lt;/code&gt;.
I also talked about the problems with my current setup and how I can improve it by introducing a new system to sync files.
If you are using &lt;code&gt;Emacs&lt;/code&gt;, I highly recommend you try &lt;code&gt;Elfeed&lt;/code&gt; as your RSS reader.
And if you are already using &lt;code&gt;Elfeed&lt;/code&gt;, I hope this post helps you to sync your feeds between your systems.&lt;/p&gt;
</content:encoded></item><item><title>Inter-language communication between Erlang (rebar3) and Python using Erlport</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-08-13-erlang-python-erlport/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-08-13-erlang-python-erlport/</guid><description>Creating a simple program to communicate between Erlang and Python using Erlport</description><pubDate>Sat, 12 Aug 2023 23:40:40 GMT</pubDate><content:encoded>&lt;p&gt;When it comes to fault-tolerant systems, there are very few languages that can beat &lt;code&gt;Erlang&lt;/code&gt;.
While &lt;code&gt;Erlang&lt;/code&gt; is a great language for building fault-tolerant systems, it is not the best language for building AI/ML applications.
There are instances where you would want to use &lt;code&gt;Python&lt;/code&gt; for building AI/ML applications and &lt;code&gt;Erlang&lt;/code&gt; for building fault-tolerant systems.
In such cases, you would want to use &lt;code&gt;Erlang&lt;/code&gt; and &lt;code&gt;Python&lt;/code&gt; together.
This is where &lt;a href=&quot;http://erlport.org/&quot;&gt;Erlport library&lt;/a&gt; comes in.&lt;/p&gt;
&lt;p&gt;But before understanding Erlport, you need an understanding of &lt;code&gt;ports&lt;/code&gt; in Erlang.&lt;/p&gt;
&lt;h2&gt;Ports&lt;/h2&gt;
&lt;p&gt;In simple terms, &lt;code&gt;ports&lt;/code&gt; are used to communicate with external programs which are not written in Erlang.
So if you have a C program or a Python program and you want to communicate with it from Erlang, you can use &lt;code&gt;ports&lt;/code&gt;.
You aren&apos;t restricted to external programs, though; you can also use &lt;code&gt;ports&lt;/code&gt; to communicate with other Erlang nodes.
A &lt;code&gt;port&lt;/code&gt; provides a byte-oriented interface:
you send a list of bytes and receive a list of bytes during the communication.
This also means you need to handle the encoding and decoding of the data at both ends yourself.&lt;/p&gt;
&lt;h2&gt;Erlport&lt;/h2&gt;
&lt;p&gt;Erlport is a library that internally uses the &lt;code&gt;port&lt;/code&gt; mechanism for communication and provides a wrapper around it.
This makes the integration of Erlang and Python very easy.
Right now, Erlport supports Python and Ruby.
In this article, we will be focusing on Erlang and Python implementation.&lt;/p&gt;
&lt;h3&gt;Installation&lt;/h3&gt;
&lt;p&gt;If you are using &lt;code&gt;rebar3&lt;/code&gt; for building your Erlang application, you can add the following dependency to your &lt;code&gt;rebar.config&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;...
{deps, [
    erlport
]}.
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or you can use the GitHub link for the dependency.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{erl_opts, [debug_info]}.
{deps, [
  {erlport, {git, &quot;https://github.com/erlport/erlport.git&quot;, {tag, &quot;v0.10.1&quot;}}}
]}.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&apos;ll also need to tell &lt;code&gt;Erlang&lt;/code&gt; to start &lt;code&gt;erlport&lt;/code&gt; before your application starts.
This is done by listing it in the &lt;code&gt;applications&lt;/code&gt; entry of the &lt;code&gt;*.app.src&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; {applications,
   [kernel,
    stdlib,
    erlport
   ]},
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Python program&lt;/h3&gt;
&lt;p&gt;Let&apos;s create an &lt;code&gt;add&lt;/code&gt; program in Python and keep the file in the &lt;code&gt;priv&lt;/code&gt; directory.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def add(a, b):
    return a + b
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Erlang program&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;Erlang&lt;/code&gt; program will initialize the Python port using &lt;code&gt;python:start/1&lt;/code&gt; function.
And will call the &lt;code&gt;add&lt;/code&gt; function of the Python program using &lt;code&gt;python:call/4&lt;/code&gt; function.
The &lt;code&gt;python:call/4&lt;/code&gt; function takes the &lt;code&gt;Pid&lt;/code&gt; of the Python port, the module name, the function name, and the list of arguments to be passed to the function.&lt;/p&gt;
&lt;p&gt;Here, my &lt;code&gt;Python&lt;/code&gt; module is named, &lt;code&gt;math&lt;/code&gt; and the function is &lt;code&gt;add&lt;/code&gt; with &lt;code&gt;2&lt;/code&gt; and &lt;code&gt;4&lt;/code&gt; as parameters.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;-module(main).
-export([call_python/0]).

call_python() -&amp;gt;
  PythonCodePath = code:priv_dir(aiml_model_wrapper),
  {ok, P} = python:start([{python_path, PythonCodePath}, {python, &quot;python3&quot;}]),
  python:call(P, math, add, [2, 4]).

&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Running the program&lt;/h3&gt;
&lt;p&gt;Running the program is as simple as compiling and running the &lt;code&gt;Erlang&lt;/code&gt; program.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rebar3 compile
rebar3 shell
1&amp;gt; main:call_python().
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this article, we saw how we can use &lt;code&gt;Erlang&lt;/code&gt; and &lt;code&gt;Python&lt;/code&gt; together using &lt;code&gt;Erlport&lt;/code&gt;.
This is a very simple example of how you can use &lt;code&gt;Erlang&lt;/code&gt; and &lt;code&gt;Python&lt;/code&gt; together.
You can use this to build complex systems where you can use &lt;code&gt;Erlang&lt;/code&gt; for building fault-tolerant systems and &lt;code&gt;Python&lt;/code&gt; for building AI/ML applications.
At &lt;a href=&quot;https://www.greyorange.com&quot;&gt;GreyOrange&lt;/a&gt; I created an Erlang wrapper for our AI/ML models using the same approach.&lt;/p&gt;
</content:encoded></item><item><title>Download files via REST API using chunked transfer encoding</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-05-26-chunked-file-transfer-protocol-rest-api/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-05-26-chunked-file-transfer-protocol-rest-api/</guid><description>Here, we will create a Python server that will send us a heuristic file in chunks and a Java client will read these files and download them over the REST API using chunked transfer encoding.</description><pubDate>Fri, 26 May 2023 01:04:53 GMT</pubDate><content:encoded>&lt;p&gt;Chunked transfer encoding is a protocol to send data in chunks over HTTP.
This allows us to transfer a large amount of data in chunks of small size.&lt;/p&gt;
&lt;p&gt;With chunked encoding, the server splits the response into a series of smaller &quot;chunks&quot; of data. Each chunk includes a size indicator followed by the actual chunk data. The size indicator specifies the length of the chunk data in bytes. The chunks are sent to the client one by one and can be processed by the client as they arrive.&lt;/p&gt;
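&lt;p&gt;To make the size indicator concrete, here is a small Python sketch that frames a payload the way HTTP chunked encoding does: each chunk is its size in hexadecimal, a CRLF, the chunk data, and another CRLF, with a zero-length chunk marking the end of the body.&lt;/p&gt;

```python
def chunked_encode(data: bytes, chunk_size: int = 8) -> bytes:
    """Frame data as an HTTP chunked-transfer body (no trailers)."""
    out = b""
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        # size in hex, CRLF, data, CRLF
        out += f"{len(chunk):x}".encode() + b"\r\n" + chunk + b"\r\n"
    out += b"0\r\n\r\n"  # terminating zero-length chunk
    return out

print(chunked_encode(b"Hello, chunked world!", chunk_size=8))
# b'8\r\nHello, c\r\n8\r\nhunked w\r\n5\r\norld!\r\n0\r\n\r\n'
```

In practice the HTTP server and client libraries do this framing for you; the sketch only shows what goes over the wire.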
&lt;p&gt;Here, we will create a Python server that will send us a heuristic file in chunks and a Java client will read these files and download them over the REST API.&lt;/p&gt;
&lt;h2&gt;Project structure&lt;/h2&gt;
&lt;p&gt;The project is a Maven project at the root, containing a sub-directory for our Python server.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;❯ tree .
.
|-- README.MD
|-- pom.xml
|-- python
|   |-- requirements.txt
|   `-- server.py
`-- src
    `-- main
        `-- java
            `-- com
                `-- gor
                    `-- poc
                        |-- App.java
                        `-- FileDownloader.java


&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Python server&lt;/h2&gt;
&lt;p&gt;This is a Flask server that exposes a &lt;code&gt;download&lt;/code&gt; route.&lt;/p&gt;
&lt;p&gt;For a POC, we will just send a particular file called &apos;heuristics&apos; whenever a request is made to the server.
The server will send the files in chunks of &lt;code&gt;1024&lt;/code&gt; which can be configured as per your need.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Define the route
@app.route(&apos;/download&apos;)
def download_file():
    # This is the file we want to download whenever a client requests
    file_dir = &apos;python/public/stream/data/&apos;
    file_name = &quot;heuristics.bin&quot;
    file_path = file_dir + file_name
    chunk_size = 1024
    
    # Create chunks and yield them whenever required
    def generate():
        # Read the file in chunks of chunk_size bytes
        with open(file_path, &apos;rb&apos;) as file:
            while True:
                chunk = file.read(chunk_size)
                if not chunk:
                    break
                yield chunk

    response = Response(

        # stream_with_context is a Flask functionality to send files in chunks
        # You can read more about it here https://flask.palletsprojects.com/en/1.0.x/patterns/streaming/
        stream_with_context(generate()),
        mimetype=&apos;application/octet-stream&apos;
    )
    response.headers.set(&apos;Content-Disposition&apos;, &apos;attachment&apos;, filename=file_name)
    return response
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# Let&apos;s create a heuristic file for our server to send 
# We&apos;ll create a random file of size 1GB
dd if=/dev/urandom of=python/public/stream/data/heuristics.bin bs=1G count=1
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# The below commands only work on macOS
# Check the size of the file generated in bytes
stat -f &quot;%z&quot; python/public/stream/data/heuristics.bin
# 1073741824

# Let&apos;s check the md5 of the file
md5 -r python/public/stream/data/heuristics.bin | awk &apos;{print $1}&apos;
# c5b8959732d3359791bcd06ca5a92dc2
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# For linux, you can use the below code
# Check the size of the file generated in bytes
stat -c &quot;%s&quot; python/public/stream/data/heuristics.bin

# Check the md5 of the file
md5sum python/public/stream/data/heuristics.bin | awk &apos;{print $1}&apos;

&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# Let&apos;s run the python server now
# We&apos;ll use a virtual environment to run the server (you can also use conda)
source .venv/bin/activate

# Install the dependencies
pip install -r python/requirements.txt

# Run the server (and keep it running)
python python/server.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can also use &lt;code&gt;curl&lt;/code&gt; to check the header.&lt;/p&gt;
&lt;p&gt;The presence of &lt;code&gt;Transfer-Encoding: chunked&lt;/code&gt; tells us that the server is using the chunked protocol to send the data.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# curl the header
# -I asks for headers only 
# -XGET tells it to use the GET method (can be omitted as it is the default)
curl -I -XGET &quot;http://localhost:5000/download&quot;

# HTTP/1.1 200 OK
# Server: Werkzeug/2.3.4 Python/3.10.10
# Date: Fri, 26 May 2023 00:52:01 GMT
# Content-Type: application/octet-stream
# Content-Disposition: attachment; filename=heuristics.bin
# X-Chunk-Size: 1024
# Transfer-Encoding: chunked
# Connection: close
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can also use &lt;code&gt;curl&lt;/code&gt; to download the file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Download the file in the current directory
# First let&apos;s remove the file if it exists
rm -rf heuristics.bin

# -O says to save the file instead of printing it to stdout
# -J says to use the filename from the Content-Disposition header
# -L says to follow redirects (for us it&apos;s optional as we are not using any redirects)
curl -O -J -L http://localhost:5000/download

# Also let&apos;s verify the file size and md5 of the downloaded file
stat -f &quot;%z&quot; heuristics.bin
# 1073741824

md5 -r heuristics.bin | awk &apos;{print $1}&apos;
# c5b8959732d3359791bcd06ca5a92dc2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can match the &lt;code&gt;md5&lt;/code&gt; of the server file and what we downloaded to check the successful file transfer.&lt;/p&gt;
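&lt;p&gt;The same md5 comparison can be done programmatically; a small Python helper might look like this (the paths in the comment are the ones used above):&lt;/p&gt;

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Compute the md5 of a file without loading it fully into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare the server's copy with the downloaded one, e.g.:
# assert md5_of("python/public/stream/data/heuristics.bin") == md5_of("heuristics.bin")
```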
&lt;h2&gt;Java Client&lt;/h2&gt;
&lt;p&gt;The code for downloading the file using &lt;code&gt;Java&lt;/code&gt; is located at &lt;code&gt;com.gor.poc.FileDownloader&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Below is the explanation of the code.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// This is the REST API which will serve you the file
String fileUrl = &quot;http://localhost:5000/download&quot;;
String savePath = &quot;&quot;; 

try {
    // Defining the URL
    URL url = new URL(fileUrl);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod(&quot;GET&quot;);

    int responseCode = connection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        // Get the file name from the header
        String fileName = connection.getHeaderField(&quot;Content-Disposition&quot;);
        fileName = fileName.substring(fileName.lastIndexOf(&quot;=&quot;) + 1);
        String filePath = savePath + &quot;/&quot; + fileName;

        // Create an Input stream to download the file
        InputStream inputStream = connection.getInputStream();

        // This is where the file will be saved
        FileOutputStream outputStream = new FileOutputStream(&quot;./&quot;+filePath);

        // Read the data in chunks and save it to the file
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            outputStream.write(buffer, 0, bytesRead);
        }
        
        outputStream.close();
        inputStream.close();

        System.out.println(&quot;File downloaded successfully!&quot;);
    } else {
        System.out.println(&quot;File download failed. Server returned response code: &quot; + responseCode);
    }

    connection.disconnect();
} catch (IOException e) {
    e.printStackTrace();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# You can run the java program using maven 
# First remove the file if it exists
rm -rf heuristics.bin

# Compile and run the java program
mvn clean compile
mvn exec:java -Dexec.mainClass=&quot;com.gor.poc.FileDownloader&quot;
# [INFO] Scanning for projects...
# [INFO] 
# [INFO] --------------------&amp;lt; com.gor.poc:stream_download &amp;gt;---------------------
# [INFO] Building stream_download 1.0-SNAPSHOT
# [INFO] --------------------------------[ jar ]---------------------------------
# [INFO] 
# [INFO] --- exec-maven-plugin:3.1.0:java (default-cli) @ stream_download ---
# Downloading file in chunks...
# 27 bytes read
# 997 bytes read
# 25 bytes read
# 999 bytes read
# 25 bytes read

# Get the size of the downloaded file and the md5 of the source file to compare
stat -f &quot;%z&quot; heuristics.bin
md5 -r python/public/stream/data/heuristics.bin | awk &apos;{print $1}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output shows that the file arrives in small chunks like 27 bytes or 997 bytes, even though our client reads into a 4096-byte buffer.
This is because the server sends chunks of 1024 bytes, which the Java program consumes much faster than they arrive.
This is one disadvantage of chunked encoding: the client has no control over the data transfer rate.
Even if the client is faster, the download is limited by the server&apos;s configuration.&lt;/p&gt;
&lt;p&gt;To address this, let&apos;s configure our Python server to send 1 MB chunks instead of 1 KB, then restart it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    file_path = file_dir + file_name
-    chunk_size = 1024  # Adjust the chunk size as per your requirements
+    chunk_size = 1024 * 1024
    
    def generate():
&lt;/code&gt;&lt;/pre&gt;
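&lt;p&gt;For reference, the server&apos;s &lt;code&gt;generate()&lt;/code&gt; is a plain Python generator that reads the file in &lt;code&gt;chunk_size&lt;/code&gt; pieces, and Flask streams each yielded piece as one chunk. A self-contained sketch of the pattern (the real server code may differ slightly):&lt;/p&gt;

```python
import io

def make_generator(fileobj, chunk_size):
    """Build a generate() like the Flask server: yield the file in chunk_size pieces."""
    def generate():
        while True:
            chunk = fileobj.read(chunk_size)
            if not chunk:
                break
            yield chunk
    return generate

# 2500 bytes streamed in 1024-byte chunks: two full chunks plus one partial
gen = make_generator(io.BytesIO(b"a" * 2500), 1024)
print([len(c) for c in gen()])
# [1024, 1024, 452]
```

&lt;p&gt;Bumping &lt;code&gt;chunk_size&lt;/code&gt; to &lt;code&gt;1024 * 1024&lt;/code&gt; simply makes each yielded piece, and therefore each HTTP chunk, 1 MB.&lt;/p&gt;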
&lt;pre&gt;&lt;code&gt;rm -rf heuristics.bin

# Compile and run the java program
mvn clean compile
mvn exec:java -Dexec.mainClass=&quot;com.gor.poc.FileDownloader&quot;
# Downloading file in chunks...
# 24 bytes read
# 4096 bytes read
# 4096 bytes read
# 4096 bytes read
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the file transfer is much faster, but the client is still the bottleneck: it consumes data more slowly than the server sends it.
We could increase the transfer rate further by configuring the client to read 1 MB at a time as well.&lt;/p&gt;
&lt;p&gt;Let&apos;s configure our client to use the exact bytes our server is sending.&lt;/p&gt;
&lt;p&gt;For this, we&apos;ll send the chunk size used by our server as header metadata.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    response.headers.set(&apos;Content-Disposition&apos;, &apos;attachment&apos;, filename=file_name)
+   response.headers.set(&apos;X-Chunk-Size&apos;, str(chunk_size))  # Add chunk size as a header

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And on the Java side, we can read this header and set our buffer size to match the chunk size.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    // This is the REST API which will serve you the file
    String fileUrl = &quot;http://localhost:5000/download&quot;; 
    String savePath = &quot;&quot;; 

    try {
      // Define the URL 
      URL url = new URL(fileUrl);
      HttpURLConnection connection = (HttpURLConnection) url.openConnection();
      connection.setRequestMethod(&quot;GET&quot;);

      int responseCode = connection.getResponseCode();
      if (responseCode == HttpURLConnection.HTTP_OK) {
        
        // Get the file name from the header
        String fileName = connection.getHeaderField(&quot;Content-Disposition&quot;);
        fileName = fileName.substring(fileName.lastIndexOf(&quot;=&quot;) + 1);
        String filePath = savePath + &quot;/&quot; + fileName;

        // Get the chunk size from the response headers
+       String chunkSizeHeader = connection.getHeaderField(&quot;X-Chunk-Size&quot;);
+       int chunkSize = Integer.parseInt(chunkSizeHeader);

        // Create an input stream to download the file
        InputStream inputStream = connection.getInputStream();
        FileOutputStream outputStream = new FileOutputStream(&quot;./&quot; + filePath);

        // Read the data in chunks and save it to the file
+       byte[] buffer = new byte[chunkSize];
        int bytesRead;

        // Check if the server supports chunked transfer encoding
        String transferEncoding = connection.getHeaderField(&quot;Transfer-Encoding&quot;);
        boolean isChunked = &quot;chunked&quot;.equalsIgnoreCase(transferEncoding);

+        // Just to distinguish between the chunked protocol and a normal file transfer
+        if (isChunked) {
+          // Read and write the response data in chunks
+          System.out.println(&quot;Downloading file in chunks...&quot;);
+          while ((bytesRead = inputStream.read(buffer)) != -1) {
+            System.out.println(bytesRead + &quot; bytes read&quot;);
+            outputStream.write(buffer, 0, bytesRead);
+          }
+        } else {
+          // Read and write the entire response data
+          System.out.println(&quot;Downloading file as whole&quot;);
+          while ((bytesRead = inputStream.read(buffer)) != -1) {
+            System.out.println(bytesRead + &quot; bytes read&quot;);
+            outputStream.write(buffer, 0, bytesRead);
+          }
+        }

        outputStream.close();
        inputStream.close();

        System.out.println(&quot;File downloaded successfully!&quot;);
      } else {
        System.out.println(&quot;File download failed. Server returned response code: &quot; + responseCode);
      }

      connection.disconnect();
    } catch (IOException e) {
      e.printStackTrace();
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;rm -rf heuristics.bin

# Compile and run the java program
mvn clean compile
mvn exec:java -Dexec.mainClass=&quot;com.gor.poc.FileDownloader&quot;

# Downloading file in chunks...
# 24 bytes read
# 768664 bytes read
# 279888 bytes read
# 24 bytes read
# 391942 bytes read
# 393112 bytes read
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Looks like we are now using the client&apos;s full read capacity.&lt;/p&gt;
&lt;h2&gt;Analysis&lt;/h2&gt;
&lt;p&gt;Now the only remaining optimization for sending files quickly is on the server side.
Let&apos;s log the time it takes to download the file with different chunk sizes on the server.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Chunk size&lt;/th&gt;
&lt;th&gt;Time (in ms)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;512 B&lt;/td&gt;
&lt;td&gt;9337&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1 KB&lt;/td&gt;
&lt;td&gt;5150&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1 MB&lt;/td&gt;
&lt;td&gt;465&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64 MB&lt;/td&gt;
&lt;td&gt;639&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128 MB&lt;/td&gt;
&lt;td&gt;744&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256 MB&lt;/td&gt;
&lt;td&gt;740&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1 GB&lt;/td&gt;
&lt;td&gt;1169&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The transfer time first decreases as the chunk size grows, and then starts increasing again beyond a certain point.&lt;/p&gt;
&lt;p&gt;The above analysis was done on Mac M1 - 2020 Model with 16 GB RAM (4 GB available).&lt;/p&gt;
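&lt;p&gt;For the curious, such timings can be collected with a helper along these lines. This is a simplified in-memory model (hypothetical code, not the exact benchmark used above), but the read loop mirrors the client&apos;s:&lt;/p&gt;

```python
import io
import time

def time_transfer(data, chunk_size):
    """Time how long streaming data in chunk_size pieces takes."""
    src = io.BytesIO(data)
    sink = io.BytesIO()
    start = time.perf_counter()
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        sink.write(chunk)
    elapsed = time.perf_counter() - start
    assert sink.getvalue() == data  # sanity check: nothing was lost
    return elapsed

payload = b"x" * (8 * 1024 * 1024)  # 8 MB stand-in for the 1 GB file
for size in (512, 1024, 1024 * 1024):
    print(size, "B chunks:", round(time_transfer(payload, size) * 1000, 2), "ms")
```

&lt;p&gt;Even in memory, tiny chunks pay a visible per-chunk overhead; over HTTP that overhead also includes the chunk framing and extra system calls, which is why the 512 B row is the slowest in the table.&lt;/p&gt;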
</content:encoded></item><item><title>Package Java JNI libraries in a JAR using Maven</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-05-02-package-java-jni-libraries/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-05-02-package-java-jni-libraries/</guid><description>This article shows how to package JNI libraries in a JAR using Maven.</description><pubDate>Tue, 02 May 2023 11:51:43 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;If you are using a native library in Java, you can simply load the &lt;code&gt;dll&lt;/code&gt;, &lt;code&gt;dylib&lt;/code&gt;, or &lt;code&gt;so&lt;/code&gt; file using the &lt;code&gt;System.loadLibrary&lt;/code&gt; method.
But if you package your Java program as a JAR file, the native library will not be available to &lt;code&gt;System.loadLibrary&lt;/code&gt;.
In this article, we&apos;ll see the correct way to package the native library in the JAR file using Maven so that it is accessible in our Java program without any extra user intervention.&lt;/p&gt;
&lt;h2&gt;Project Structure&lt;/h2&gt;
&lt;p&gt;Maven follows a specific project structure where all the source code is placed in &lt;code&gt;src/main/java&lt;/code&gt; and all the resources are placed in &lt;code&gt;src/main/resources&lt;/code&gt;.
The resources directory is added to the classpath when the project is compiled and packaged.
This means you can access the contents of the &lt;code&gt;resources&lt;/code&gt; directory directly in the Java program by specifying the path relative to the resources directory.&lt;/p&gt;
&lt;p&gt;Let&apos;s say you have a native library &lt;code&gt;libhello.dylib&lt;/code&gt; in the &lt;code&gt;resources/lib&lt;/code&gt; directory.
After packaging this will be available at the path &lt;code&gt;lib/libhello.dylib&lt;/code&gt; in the JAR file.&lt;/p&gt;
&lt;h2&gt;Use Native Utils&lt;/h2&gt;
&lt;p&gt;The default way to load a native library in Java is to use the &lt;code&gt;System.loadLibrary&lt;/code&gt; method.
The &lt;code&gt;mvn package&lt;/code&gt; command will generate the jar file of the project.
The native library will be packaged in the &lt;code&gt;lib&lt;/code&gt; directory inside the jar file which will not be available to the &lt;code&gt;System.loadLibrary&lt;/code&gt; method.&lt;/p&gt;
&lt;p&gt;Now running our app will require us to extract the library from the jar file and put it somewhere on the system.
This can be done using &lt;a href=&quot;https://github.com/adamheinrich/native-utils&quot;&gt;Native Utils&lt;/a&gt;.
Internally, this utility extracts the native library from the jar file to a temporary directory and loads it from there.
This saves us the dirty work of extracting and loading the library ourselves.
It provides a method &lt;code&gt;loadLibraryFromJar&lt;/code&gt; which can be used to load the native library from the jar file.&lt;/p&gt;
&lt;h2&gt;Configurations&lt;/h2&gt;
&lt;p&gt;As &lt;a href=&quot;https://github.com/adamheinrich/native-utils&quot;&gt;Native Utils&lt;/a&gt; is not available as a release package, we&apos;ll need to add it directly from the GitHub repository.
For this, we can use the &lt;a href=&quot;https://jitpack.io/&quot;&gt;JitPack&lt;/a&gt; service which allows us to add GitHub repositories as Maven dependencies.
You&apos;ll need to add &lt;a href=&quot;https://jitpack.io/&quot;&gt;JitPack&lt;/a&gt; as a repository in your &lt;code&gt;pom.xml&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  &amp;lt;repositories&amp;gt;
    &amp;lt;repository&amp;gt;
      &amp;lt;id&amp;gt;jitpack.io&amp;lt;/id&amp;gt;
      &amp;lt;url&amp;gt;https://jitpack.io&amp;lt;/url&amp;gt;
    &amp;lt;/repository&amp;gt;
  &amp;lt;/repositories&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Also, add the dependency for &lt;a href=&quot;https://github.com/adamheinrich/native-utils&quot;&gt;Native Utils&lt;/a&gt;.
For this, we need to give &lt;code&gt;groupId&lt;/code&gt; as &lt;code&gt;com.github.&amp;lt;user-name&amp;gt;&lt;/code&gt; and &lt;code&gt;artifactId&lt;/code&gt; as &lt;code&gt;&amp;lt;repository-name&amp;gt;&lt;/code&gt;.
Since we are using the latest commit from the repository, we&apos;ll need to specify the commit hash in the version tag.
Below is how the dependency will look in &lt;code&gt;pom.xml&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  &amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;com.github.adamheinrich&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;native-utils&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;e6a39489662846a77504634b6fafa4995ede3b1d&amp;lt;/version&amp;gt;
  &amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, you can confirm the dependency is added by running the &lt;code&gt;mvn dependency:tree&lt;/code&gt; command.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; mvn dependency:tree

[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ javaunsafe ---
[INFO] com.gor.poc:javaunsafe:jar:1.0-SNAPSHOT
[INFO] +- log4j:log4j:jar:1.2.17:compile
[INFO] +- junit:junit:jar:3.8.1:test
[INFO] \- com.github.adamheinrich:native-utils:jar:e6a39489662846a77504634b6fafa4995ede3b1d:compile
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, my project uses three dependencies: log4j, junit, and native-utils, each with the version specified.&lt;/p&gt;
&lt;h2&gt;Load the native library&lt;/h2&gt;
&lt;p&gt;In your Java file, you can call the &lt;code&gt;loadLibraryFromJar&lt;/code&gt; method to load the native library.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import cz.adamh.utils.NativeUtils;
import java.io.IOException;

public class NativeMemoryLoader {

  static {
    try {
      NativeUtils.loadLibraryFromJar(&quot;/lib/libhello.dylib&quot;);
    } catch (IOException e) {
      e.printStackTrace();
    }
  }

  public static native void sayHello();

  ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Load library based on your OS&lt;/h2&gt;
&lt;p&gt;The problem with native libraries written in C/C++ is that the compiled binary is platform-dependent.
This means you&apos;ll need to compile the library for each platform and then package it in the JAR file.
So you&apos;ll have different versions of the library for different platforms. And you&apos;ll end up with &lt;code&gt;so&lt;/code&gt;, &lt;code&gt;dylib&lt;/code&gt;, and &lt;code&gt;dll&lt;/code&gt; files in your &lt;code&gt;lib&lt;/code&gt; directory.
Let&apos;s modify our Java code to use the library based on the platform.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  // Get the file extension based on OS
  String osName = System.getProperty(&quot;os.name&quot;).toLowerCase();
  String libExtension = osName.contains(&quot;win&quot;) ? &quot;.dll&quot; :
                        osName.contains(&quot;mac&quot;) ? &quot;.dylib&quot; : &quot;.so&quot;;
  String libPath = &quot;/lib/libhello&quot; + libExtension;
  NativeUtils.loadLibraryFromJar(libPath);
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this article, we saw how to package native libraries in a JAR file using Maven and load them in our Java program.
We also saw how to load the library based on the platform.&lt;/p&gt;
</content:encoded></item><item><title>Configure Google Java Formatter with VSCode</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-04-22-google-java-format-vscode/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-04-22-google-java-format-vscode/</guid><description>This blog is on how to configure Google Java Formatter with VSCode</description><pubDate>Sat, 22 Apr 2023 05:07:34 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Using the same code format is helpful while working on a team project.
It improves code readability, helps you avoid merge conflicts, and makes pull requests easier to review.
For this, people use code formatters like Google Java Format, Prettier, etc.
In this blog, we will see how to configure Google Java Formatter with VSCode.&lt;/p&gt;
&lt;h2&gt;Download the jar file&lt;/h2&gt;
&lt;p&gt;The first thing you need to do is download the Google Formatter jar file from &lt;a href=&quot;https://github.com/google/google-java-format/releases&quot;&gt;here&lt;/a&gt; and keep it in your home directory.
You can keep it anywhere you want but I prefer to keep it in my home directory.
Make sure you download the jar file with &lt;code&gt;-all-deps&lt;/code&gt; in the name.&lt;/p&gt;
&lt;p&gt;You can also download the jar file using the following command.
The &lt;code&gt;-O&lt;/code&gt; flag is used to specify the location where you want to download the jar file.
The below command will download the jar file and store it in the home directory with the file name &lt;code&gt;google-code-formatter.jar&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; wget https://github.com/google/google-java-format/releases/download/v1.16.0/google-java-format-1.16.0.jar \
  -O ~/google-code-formatter.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Configure the VSCode&lt;/h2&gt;
&lt;p&gt;Now we will configure the VSCode to use the Google Java Formatter.
For this, you&apos;ll need an extension called &lt;a href=&quot;https://marketplace.visualstudio.com/items?itemName=SteefH.external-formatters&quot;&gt;External Formatters&lt;/a&gt; extension.
This extension helps you use an external code formatter with VSCode.&lt;/p&gt;
&lt;p&gt;To install the extension, open the VSCode and press &lt;code&gt;Cmd + Shift + X&lt;/code&gt; to open the extension tab.
Search for &lt;code&gt;External Formatters&lt;/code&gt; authored by Stefan van der Haven and install it.&lt;/p&gt;
&lt;p&gt;Now open the VSCode settings by pressing &lt;code&gt;Cmd + ,&lt;/code&gt; and search for &lt;code&gt;External Formatters&lt;/code&gt;.
Then click on the &lt;code&gt;Edit in settings.json&lt;/code&gt; button for modifying the language setting.&lt;/p&gt;
&lt;p&gt;Now add the following code in the &lt;code&gt;settings.json&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  ...
  &quot;externalFormatters.languages&quot;: {
      &quot;java&quot;: {
          &quot;command&quot;: &quot;java&quot;,
          &quot;arguments&quot;: [
              &quot;-jar&quot;,
              &quot;/Users/shubham.kumar/google-code-formatter.jar&quot;,
              &quot;-&quot;
          ]
      }
  },
  &quot;[java]&quot;: {
      &quot;editor.formatOnSave&quot;: true,
      &quot;editor.defaultFormatter&quot;: &quot;SteefH.external-formatters&quot;,
      &quot;editor.tabSize&quot;: 2
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above configuration defined inside &lt;code&gt;externalFormatters.languages&lt;/code&gt; configures the External Formatters extension to use the Google Java Formatter jar file for formatting the Java files.
This configuration tells VSCode to run the &lt;code&gt;java -jar /Users/shubham.kumar/google-code-formatter.jar -&lt;/code&gt; command, which formats the Java code piped to it on stdin.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;[java]&lt;/code&gt; block is a language-specific setting that tells VSCode to use the External Formatters extension for Java files and to run the formatter whenever a file is saved.
The Google Java formatter uses 2 spaces for indentation by default,
so we set &lt;code&gt;editor.tabSize&lt;/code&gt; to &lt;code&gt;2&lt;/code&gt; for Java files.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can also use &lt;code&gt;.vscode/settings.json&lt;/code&gt; for configuring the VSCode settings for a particular project.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog, we saw how to configure Google Java Formatter with VSCode.
Now you can use the same code formatter for all your Java projects and also use it with your team members to avoid merge conflicts and make your pull requests easier to review.&lt;/p&gt;
</content:encoded></item><item><title>Upgrade a simple HTML, CSS and JS site to a WebSocket application</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-01-26-socketio-game/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-01-26-socketio-game/</guid><description>Initially, I thought it was going to be a short one but it ended up with more than 10 mins of reading time. This blog revolves around integrating a WebSocket app using socket.io to an existing HTML, CSS and JS site.</description><pubDate>Thu, 26 Jan 2023 20:53:46 GMT</pubDate><content:encoded>&lt;h2&gt;Intro&lt;/h2&gt;
&lt;p&gt;Sarvesh, one of my friends, created a gaming project called &quot;Pig Game&quot; during his Javascript learning journey.
It is a basic site built using HTML, CSS, and JS in which two people compete by rolling a die.&lt;/p&gt;
&lt;p&gt;At any moment, you have two options: roll the die to increase your score, or hold so that the next person may play.
The catch is that if the die lands on &lt;strong&gt;1&lt;/strong&gt;, your current score becomes &lt;strong&gt;0&lt;/strong&gt;, and the turn passes to the next player.
But if you willingly pass the turn, your current score gets added to your total score, which saves your hard work.&lt;/p&gt;
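&lt;p&gt;To make the rules concrete, the turn logic can be sketched as two pure functions (hypothetical names, not the game&apos;s actual code):&lt;/p&gt;

```python
def roll(total, current, dice):
    """Apply one roll. Rolling a 1 wipes the current score and passes the turn."""
    if dice == 1:
        return total, 0, True        # (total, current, turn_passes)
    return total, current + dice, False

def hold(total, current):
    """Bank the current score into the total and pass the turn."""
    return total + current, 0, True

print(roll(10, 5, 4))   # (10, 9, False)  keep rolling
print(roll(10, 9, 1))   # (10, 0, True)   busted, turn passes
print(hold(10, 9))      # (19, 0, True)   progress saved
```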
&lt;p&gt;For the time being, this game is only a static web page with all of its logic running in the browser.
Because of this, it is unplayable unless both players are in the same room (or virtually sharing the host&apos;s screen).
We want to make this a server-based game so that we can play together even from afar.&lt;/p&gt;
&lt;p&gt;Here, let&apos;s implement a WebSocket server using express.js &amp;amp; socket.io. Also, we&apos;ll configure the front-end to establish a connection with this server.&lt;/p&gt;
&lt;h2&gt;HTTP vs WebSocket protocols&lt;/h2&gt;
&lt;h3&gt;HTTP&lt;/h3&gt;
&lt;p&gt;Let&apos;s have a bird&apos;s-eye view of the traditional &lt;code&gt;HTTP&lt;/code&gt; protocol.
With &lt;code&gt;HTTP&lt;/code&gt;, a client sends a request to the server; the server processes the request and sends a response in return.
The connection is then closed after this request-response cycle.
&lt;code&gt;HTTP&lt;/code&gt; is stateless, and because it runs on TCP, data delivery is ensured.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A stateless protocol does not require the server to retain information or status about each user for the duration of multiple requests. -- wikipedia&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;With &lt;code&gt;HTTP&lt;/code&gt;, you are required to use cookies or server-side sessions so that the server does not forget the user between requests.&lt;/p&gt;
&lt;p&gt;I think that&apos;s all we need to talk about at least during the starting phase.&lt;/p&gt;
&lt;h3&gt;Steps 2 and 3: Onboarding the players&lt;/h3&gt;
&lt;p&gt;The first step is already taken care of by socket.io.
Let&apos;s configure our server for steps 2 and 3.
We want to send a &apos;ready&apos; signal when 2 players are connected.
If more than 2 people join, we can reject their connection on the server side using the &lt;code&gt;socket.disconnect()&lt;/code&gt; function and send them a &apos;reject&apos; signal.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;socket.emit()&lt;/code&gt; sends the message to a particular socket (client), &lt;code&gt;socket.broadcast.emit()&lt;/code&gt; sends the message to everyone except the socket (client) itself and &lt;code&gt;io.emit()&lt;/code&gt; sends the message to all the clients connected to a socket.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;&lt;code&gt;server.js&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;...
let players = {};    // This will store the details about the players like their score
let activePlayer = 0;

io.on(&apos;connection&apos;, (socket) =&amp;gt; {   
  players[socket.id] = {    // Initialize the player on establishing a connection
    &quot;socket&quot;: socket,
    &quot;total_score&quot;: 0,
    &quot;current_score&quot;: 0,
    &quot;isActivePlayer&quot;: false,
  } 
  let total_players = Object.keys(players).length
  console.log(`Player with socket id ${socket.id} connected.`)
  console.log(`Total number of players: ${total_players}`)

  // Sending the connection status
  // Reject if we already have 2 players connected
  if (total_players &amp;gt; 2) {
    delete players[socket.id]
    socket.emit(&quot;connection_status&quot;, {&quot;connection_status&quot;: &quot;reject&quot;});
    socket.disconnect();
    console.log(&quot;Already 2 players are onboarded&quot;, Object.keys(players))
  }
  // Send ready signal when we have 2 players 
  else if (total_players == 2) {
    // Tell player 1 to go first
    activePlayer = 0
    io.emit(&quot;connection_status&quot;, {&quot;connection_status&quot;: &quot;ready&quot;, &quot;active_player&quot;: Object.keys(players)[activePlayer]})
  }
  // Else send a waiting signal
  else {
    io.emit(&quot;connection_status&quot;,  {&quot;connection_status&quot;: &quot;waiting&quot;})
  }

  socket.on(&apos;disconnect&apos;, () =&amp;gt; {
      delete players[socket.id];
      console.log(&apos;Client disconnected &apos; + socket.id);
  });
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On the client side, we will consume the received messages to perform certain actions.
This is done by adding a socket message listener with &lt;code&gt;socket.on()&lt;/code&gt;.
For now, let&apos;s use alerts to notify the current player about the game status.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;public/script.js&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;socket.on(&quot;connection_status&quot;, (args) =&amp;gt; {
    console.log(&quot;Received a connection_status signal&quot;, args.connection_status)
    if (args.connection_status === &apos;waiting&apos;) {
        alert(&quot;Waiting for second player to begin the game&quot;)
    }
    else if (args.connection_status === &apos;ready&apos;) {
        // EnablePlayer is a function that activates and deactivates the roll and hold buttons
        EnablePlayer(args.active_player);
        alert(`Let&apos;s begin the game. ${args.active_player===socket.id ? &quot;It&apos;s your turn&quot; : &quot;Opponent&apos;s turn&quot;}`)
    }
    else if (args.connection_status === &apos;reject&apos;) {
        alert(&quot;Can&apos;t join, already 2 players onboarded&quot;)
    }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 4: Play the game&lt;/h3&gt;
&lt;p&gt;Looks like this is all we wanted in steps 2 and 3.
For step 4, we will send a signal from the client&apos;s side about the player&apos;s decision.
This can be achieved with button event listeners.
The server will then pick a random number between 1 and 6 and broadcast the response.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;public/script.js&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;...
// Same for btnHold with decision &quot;hold&quot;
btnRoll.addEventListener(&apos;click&apos;, function(){
    if(playing){
        socket.emit(&quot;decide&quot;, {
            &quot;player_id&quot;: socket.id,
            &quot;decision&quot;: &quot;roll&quot;
         });
    }
});
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On the server, we can decide the score and broadcast it back to clients.
We will send the current and total scores of the player along with the next player&apos;s socket id and dice score.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;server.js&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;...
 function NotifyScores(active_player, dice) {
    let player1 = Object.keys(players)[0];
    let player2 = Object.keys(players)[1];

    io.emit(&quot;score_update&quot;, {
      &apos;player1_current&apos;: players[player1].current_score,
      &apos;player1_total&apos;: players[player1].total_score,
      &apos;player2_current&apos;: players[player2].current_score,
      &apos;player2_total&apos;: players[player2].total_score,
      &apos;next_turn&apos;: Object.keys(players)[activePlayer],
      &apos;active_player_roll&apos;: activePlayer,
      &apos;dice&apos;: dice
    });
  }

  socket.on(&quot;decide&quot;, (args) =&amp;gt; {
    console.log(&quot;decide&quot;, args);
    if (args.player_id === Object.keys(players)[activePlayer]) {
      if (args.decision === &apos;roll&apos;){
        const dice = Math.trunc(Math.random()*6) + 1;
        players[args.player_id].current_score = dice == 1 
          ? 0 
          : players[args.player_id].current_score + dice;
        activePlayer = dice == 1 ? (activePlayer + 1) % 2 : activePlayer
        NotifyScores(activePlayer, dice);
      }
      else if (args.decision === &apos;hold&apos;) {
        players[args.player_id].total_score += players[args.player_id].current_score;
        players[args.player_id].current_score = 0; 
        activePlayer = (activePlayer + 1) % 2;
        NotifyScores(activePlayer, 0);
      }
    }
  });
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 5: Update the scores&lt;/h3&gt;
&lt;p&gt;Step 5 is very simple.
Just update the data received by the client from the server in step 4.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;public/script.js&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;socket.on(&quot;score_update&quot;, (args) =&amp;gt;{
    console.log(&quot;score update&quot;, args);
    document.getElementById(`current--0`).textContent = args.player1_current;
    document.getElementById(`current--1`).textContent = args.player2_current;
    document.getElementById(`score--0`).textContent = args.player1_total;
    document.getElementById(`score--1`).textContent = args.player2_total;
    diceEl.classList.remove(&apos;hidden&apos;);
    if (args.dice != 0) diceEl.src = `assets/images/dice-${args.dice}.png`;
    EnablePlayer(args.next_turn)
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 6: Decide the winner&lt;/h3&gt;
&lt;p&gt;Step 6 is deciding the winner.
Whoever reaches a total score of 60 or more is declared the winner.
We have a few options for sending this information.
It can be sent by the server as a new event, &quot;winner&quot;.
Or we can include it in the &apos;score_update&apos; event.&lt;/p&gt;
&lt;p&gt;Let&apos;s update the &quot;decide&quot; event to send a new message whenever a player wins.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;server.js&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt; socket.on(&quot;decide&quot;, (args) =&amp;gt; {
    if (args.player_id === Object.keys(players)[activePlayer]) {
      ...
      else if (args.decision === &apos;hold&apos;){
        ...
        players[args.player_id].total_score += players[args.player_id].current_score;
        players[args.player_id].current_score = 0; 

        // Deciding the winner
        if (players[args.player_id].total_score &amp;gt;= 60) {
            io.emit(&quot;winner&quot;, activePlayer);
        }
        else {
            activePlayer = (activePlayer + 1) % 2;
            NotifyScores(activePlayer, 0);
        }
      }
      ...
    }
 });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This can be updated on the client side.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;public/script.js&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;socket.on(&quot;winner&quot;, (winner) =&amp;gt; {
    playing = false;
    document.querySelector(`.player--${winner}`).classList.add(&apos;player--winner&apos;);
    document.querySelector(`.player--${winner}`).classList.remove(&apos;player--active&apos;);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s it; now you can play the game with a friend.
You can always expose it to the internet using ngrok.
What&apos;s left is adding more features like an in-game chat, spectators, game rooms and much more.
The code could also use some cleanup.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog, we walked through the workings of a WebSocket app built with Socket.io.
We focused on establishing a connection and sending messages from server to client as well as from client to server.
Messages can be communicated in 3 ways: &lt;code&gt;io.emit()&lt;/code&gt;, &lt;code&gt;socket.emit()&lt;/code&gt; and &lt;code&gt;socket.broadcast.emit()&lt;/code&gt;.
&lt;code&gt;io.emit()&lt;/code&gt; broadcasts the message to all connected clients.
&lt;code&gt;socket.broadcast.emit()&lt;/code&gt; broadcasts the message to all clients except the sender.
&lt;code&gt;socket.emit()&lt;/code&gt; is for one-to-one server-to-client or client-to-server messaging.
To receive messages, we implement a listener that triggers on a specific event defined by us.
We also saw some special events like &lt;code&gt;connection&lt;/code&gt; and &lt;code&gt;disconnect&lt;/code&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Automate your local environment setup using dev containers</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2023-01-08-devcontainers/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2023-01-08-devcontainers/</guid><description>Now you don&apos;t need to waste any time helping a friend to contribute to your new project. Using dev containers, you can automate the process of setting up your local environment in seconds.</description><pubDate>Sun, 08 Jan 2023 19:06:49 GMT</pubDate><content:encoded>&lt;h2&gt;Index&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#problem&quot;&gt;Problem&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Automate the setup process for projects&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#dev-containers&quot;&gt;Dev Containers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;What are dev containers?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#dev-container-custom-configurations&quot;&gt;Dev container custom configurations&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Configuring a dev container as per your needs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#open-your-container-in-vscode&quot;&gt;Open your container in VSCode&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;How to open a dev container in &lt;code&gt;VSCode&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#open-your-container-in-a-browser&quot;&gt;Open your container in a browser&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Opening a dev container in the &lt;code&gt;github.dev&lt;/code&gt; text editor.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#creating-a-dev-container-using-templates-available&quot;&gt;Creating a dev container using templates available&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Java&lt;/code&gt; and &lt;code&gt;Postgres&lt;/code&gt; (as service) dev container set up.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#adding-fish-terminal-to-the-dev-container&quot;&gt;Adding fish terminal to the dev container&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Using &lt;code&gt;fish&lt;/code&gt; shell and setting the default shell for a container.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#oh-my-zsh-with-powerlevel10k&quot;&gt;oh-my-zsh with powerlevel10k&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Using &lt;code&gt;zsh&lt;/code&gt; along with &lt;code&gt;oh-my-zsh&lt;/code&gt; and &lt;code&gt;powerlevel10k&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#make-the-dev-container-distributable&quot;&gt;Make the dev container distributable&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;You should not add personal changes to source control. Use an external script to modify the dev container configuration.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Final words&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Problem&lt;/h2&gt;
&lt;p&gt;I use a dual-booted ROG with &lt;strong&gt;Ubuntu 22.10&lt;/strong&gt; and &lt;strong&gt;Windows 10&lt;/strong&gt; for development, along with a &lt;strong&gt;MacBook Air 2020&lt;/strong&gt;.
Every time I begin a project that uses a new tech stack, I have to spend time setting up the environment on all 3 machines.
And sometimes, my laziness compels me to use only one machine instead of setting up the others.
The issue is that I don&apos;t want to go through the same setup process again and again.&lt;/p&gt;
&lt;p&gt;The same problem exists on a bigger scale too.
At &lt;a href=&quot;https://www.greyorange.com/&quot;&gt;GreyOrange&lt;/a&gt;, I&apos;ve seen my supervisors spend a lot of time helping freshers set up their new laptops.
And even after following the setup guide on Confluence, it took me a long time to build my first project successfully.
Imagine helping 300 freshers set up their laptops for a particular project.&lt;/p&gt;
&lt;p&gt;Plus, the build process can differ across operating systems.
Wouldn&apos;t it be great to use the same environment that your CI/CD platform uses?&lt;/p&gt;
&lt;p&gt;This problem can be solved using &lt;strong&gt;dev containers&lt;/strong&gt; that only require &lt;code&gt;Docker&lt;/code&gt; and &lt;code&gt;git&lt;/code&gt; installed on your machine.&lt;/p&gt;
&lt;h2&gt;Dev containers&lt;/h2&gt;
&lt;p&gt;Dev containers are just Docker containers that come fully equipped with the tech stacks and tools needed to begin developing.
Simply establish a connection with your container and start writing code.
Additionally, a version control program like &lt;code&gt;git&lt;/code&gt; may be used to share this setup.
So, anyone with a computer can run these Docker containers and begin developing.
GitHub Codespaces uses dev container configurations to set up its cloud development environment.
You can also customize your dev container using &lt;code&gt;Dockerfile&lt;/code&gt; and &lt;code&gt;docker-compose&lt;/code&gt; configurations.&lt;/p&gt;
&lt;h2&gt;Dev container custom configurations&lt;/h2&gt;
&lt;p&gt;A simple dev container configuration has a &lt;code&gt;.devcontainer&lt;/code&gt; directory consisting of a &lt;code&gt;devcontainer.json&lt;/code&gt; and a &lt;code&gt;Dockerfile&lt;/code&gt;.
The &lt;code&gt;devcontainer.json&lt;/code&gt; contains all the configurations used by the container, while the &lt;code&gt;Dockerfile&lt;/code&gt; contains the instructions to build the Docker image.
You can also use an image from Docker Hub directly via the &lt;code&gt;image&lt;/code&gt; attribute.&lt;/p&gt;
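&lt;p&gt;For instance, if you don&apos;t need any custom build steps, the entire &lt;code&gt;build&lt;/code&gt; block can be replaced with an &lt;code&gt;image&lt;/code&gt; reference. A minimal sketch (the image tag here is only an example):&lt;/p&gt;

```json
{
    "name": "Hello DevContainer",
    "image": "python:3.9-slim",
    "remoteUser": "root"
}
```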
&lt;h4&gt;&lt;code&gt;Directory structure&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;.devcontainer
├── Dockerfile
└── devcontainer.json
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;&lt;code&gt;devcontainer.json&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;{
	&quot;name&quot;: &quot;Hello DevContainer&quot;,   // This is the name of the container
    
    // build defines the image configurations
    // Instead of building you can use `image` attribute to provide the docker image hosted on Docker hub
    &quot;build&quot;: {  
        &quot;dockerfile&quot;: &quot;Dockerfile&quot;, // This is the Dockerfile located at &apos;.devcontainer/Dockerfile&apos;
        &quot;context&quot;: &quot;..&quot;,            // This is where the project lies
        &quot;args&quot;: {                   // Anything supplied to Dockerfile as an argument
            &quot;PYTHON_VERSION&quot;: &quot;3.9&quot; 
        }
    },

	// Features to add to the dev container. More info: https://containers.dev/features.
	// &quot;features&quot;: {},

	// Use &apos;forwardPorts&apos; to make a list of ports inside the container available locally.
	// &quot;forwardPorts&quot;: [],

	// Will run these after the container is created
    // Generally used to install the dependencies
	// &quot;postCreateCommand&quot;: &quot;pip install -r requirements.txt&quot;,

    // You can use root as remoteUser but it&apos;s not advisable to do so
	&quot;remoteUser&quot;: &quot;root&quot;
}
&lt;/code&gt;&lt;/pre&gt;
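&lt;p&gt;Note that &lt;code&gt;devcontainer.json&lt;/code&gt; allows &lt;code&gt;//&lt;/code&gt; comments and trailing commas, which strict JSON parsers reject. As a rough sketch of how a tool could read such a file (the &lt;code&gt;parse_jsonc&lt;/code&gt; helper is my own naive illustration and assumes &lt;code&gt;//&lt;/code&gt; never appears inside string values):&lt;/p&gt;

```python
import json
import re

def parse_jsonc(text: str) -> dict:
    """Naively parse devcontainer.json-style JSON with // comments."""
    # Drop whole-line and trailing // comments (assumes no "//" inside strings).
    no_comments = re.sub(r"^\s*//.*$|\s+//.*$", "", text, flags=re.MULTILINE)
    # Drop trailing commas before a closing brace/bracket, which JSONC tolerates.
    cleaned = re.sub(r",\s*([}\]])", r"\1", no_comments)
    return json.loads(cleaned)

sample = """{
    // Lines like this one are legal in devcontainer.json
    "name": "Hello DevContainer",
    "build": {
        "dockerfile": "Dockerfile",   // relative to .devcontainer/
        "context": "..",
        "args": { "PYTHON_VERSION": "3.9" }
    },
    "remoteUser": "root",
}"""

config = parse_jsonc(sample)
print(config["name"])                              # Hello DevContainer
print(config["build"]["args"]["PYTHON_VERSION"])   # 3.9
```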
&lt;h4&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;ARG PYTHON_VERSION
FROM python:${PYTHON_VERSION}.0-slim
# And other setup instructions
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Open your container in VSCode&lt;/h2&gt;
&lt;p&gt;To connect to a dev container using &lt;a href=&quot;https://code.visualstudio.com/&quot;&gt;VSCode&lt;/a&gt;, you need to install the &lt;a&gt;Dev Containers&lt;/a&gt; extension.
You can open your current directory in a dev container by pressing &lt;code&gt;F1&lt;/code&gt; and selecting &lt;code&gt;Dev Containers: Reopen in Container&lt;/code&gt;.
You can also choose &lt;code&gt;Dev Containers: Rebuild and Reopen in Container&lt;/code&gt; in case you want to build the Docker image once again.&lt;/p&gt;
&lt;p&gt;This will take some time to build the container and install the tools.
Once completed, you can verify the connection by looking at the left end of the &lt;a href=&quot;https://code.visualstudio.com/&quot;&gt;VSCode&lt;/a&gt; window.
It should display &lt;code&gt;Dev Container: &amp;lt;Container name&amp;gt; @ &amp;lt;Operating system&amp;gt;&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Your local extensions are not carried over. You&apos;ll need to install your extensions once again for the dev container.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;You can also check the Python version to confirm it&apos;s installed properly.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; root@2a35d5d14816:/workspaces/devcontainers# python --version
Python 3.9.0
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Open your container in a browser&lt;/h2&gt;
&lt;p&gt;You can also use the power of VSCode from your browser using the &lt;a href=&quot;https://github.com/github/dev&quot;&gt;github.dev&lt;/a&gt; editor.
You need to host your project on GitHub for this to work.
Let&apos;s do that.&lt;/p&gt;
&lt;p&gt;First, you will need to add the git features to the &lt;code&gt;devcontainer.json&lt;/code&gt; file.
This will enable us to use &lt;code&gt;git&lt;/code&gt; and the &lt;code&gt;github cli&lt;/code&gt; from inside the container.
You can also enable &lt;code&gt;ssh&lt;/code&gt; using the &lt;code&gt;sshd&lt;/code&gt; feature.
A list of available features can be found at &lt;a href=&quot;https://containers.dev/features&quot;&gt;containers.dev/features&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    ...
    &quot;features&quot;: {
        &quot;ghcr.io/devcontainers/features/git:1&quot;: {},
        &quot;ghcr.io/devcontainers/features/github-cli:1&quot;: {},	
        &quot;ghcr.io/devcontainers/features/sshd:1&quot;: {}
    },
    ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then rebuild the container by pressing &lt;code&gt;F1&lt;/code&gt; -&amp;gt; &lt;code&gt;Rebuild Container&lt;/code&gt;.
Now you can use &lt;code&gt;git&lt;/code&gt; and &lt;code&gt;gh&lt;/code&gt; inside your container.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; git init
&amp;gt; git add . 
&amp;gt; git commit -m &quot;Init&quot;
&amp;gt; gh auth login # Here you can create a new ssh key for your container
&amp;gt; gh repo create hello-devcontainer --public --source . --push
✓ Created repository GO-Shubham-Kumar/hello-devcontainer on GitHub
✓ Added remote git@github.com:GO-Shubham-Kumar/hello-devcontainer.git
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 4 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (5/5), 924 bytes | 71.00 KiB/s, done.
Total 5 (delta 0), reused 0 (delta 0)
To github.com:GO-Shubham-Kumar/hello-devcontainer.git
 * [new branch]      HEAD -&amp;gt; master
Branch &apos;master&apos; set up to track remote branch &apos;master&apos; from &apos;origin&apos;.
✓ Pushed commits to git@github.com:GO-Shubham-Kumar/hello-devcontainer.git
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above procedure creates a new repository and pushes the project commits to it.
I can see my project at &lt;a href=&quot;https://github.com/GO-Shubham-Kumar/hello-devcontainer&quot;&gt;https://github.com/GO-Shubham-Kumar/hello-devcontainer&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now we have our code on GitHub.
It&apos;s time to open a dev container using github.dev.
For this, simply visit your GitHub repo and change the URL from &lt;code&gt;github.com&lt;/code&gt; to &lt;code&gt;github.dev&lt;/code&gt;.
This will launch a new VSCode-style window with all your tools available.&lt;/p&gt;
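&lt;p&gt;The URL swap is purely mechanical. As a tiny sketch (the &lt;code&gt;to_github_dev&lt;/code&gt; helper name is my own):&lt;/p&gt;

```python
def to_github_dev(repo_url: str) -> str:
    """Rewrite a github.com repository URL to its github.dev editor URL."""
    return repo_url.replace("github.com", "github.dev", 1)

url = to_github_dev("https://github.com/GO-Shubham-Kumar/hello-devcontainer")
print(url)  # https://github.dev/GO-Shubham-Kumar/hello-devcontainer
```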
&lt;blockquote&gt;
&lt;p&gt;Note: github.dev is a lightweight editor which does not support a terminal. If you want to use the terminal, you should switch to GitHub Codespaces.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Creating a dev container using templates available&lt;/h2&gt;
&lt;p&gt;Above, I showed you how to configure a dev container using a test repository.
Now, I&apos;ll show you how to use available templates to run services like &lt;code&gt;PostgreSQL&lt;/code&gt; along with your container.&lt;/p&gt;
&lt;p&gt;At &lt;a href=&quot;https://www.greyorange.com/&quot;&gt;GreyOrange&lt;/a&gt;, we have a project that uses &lt;code&gt;SpringBoot&lt;/code&gt; and &lt;code&gt;PostgreSQL&lt;/code&gt;.
We need a list of databases to run the test cases for this project.
Let&apos;s try to configure our dev container to have a &lt;code&gt;PostgreSQL&lt;/code&gt; service along with the databases required.&lt;/p&gt;
&lt;p&gt;A summary of the things we need:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Java 8 and Maven&lt;/li&gt;
&lt;li&gt;PostgreSQL 9.6&lt;/li&gt;
&lt;li&gt;Populate the databases&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can visit &lt;a href=&quot;https://containers.dev/templates.html&quot;&gt;container templates&lt;/a&gt; to search for a suitable template.
We will use the &lt;a href=&quot;https://github.com/devcontainers/templates/tree/main/src/java-postgres&quot;&gt;Java &amp;amp; Postgres&lt;/a&gt; template.
This template allows us to configure the Java version, package manager and PostgreSQL version.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In case your template isn&apos;t available, you will need to create your &lt;code&gt;Dockerfile&lt;/code&gt; &amp;amp; &lt;code&gt;docker-compose&lt;/code&gt; files.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;From your &lt;a href=&quot;https://code.visualstudio.com/&quot;&gt;VSCode&lt;/a&gt; editor,
Click &lt;code&gt;F1&lt;/code&gt; -&amp;gt; &lt;code&gt;Dev Containers: Add Dev Container Configuration Files...&lt;/code&gt;.
Click on  &lt;code&gt;Show All Definitions&lt;/code&gt; and select &lt;code&gt;Java &amp;amp; PostgreSQL&lt;/code&gt;.
I&apos;m using version &lt;code&gt;8-bullseye&lt;/code&gt; with Maven so I&apos;ll select them.
After a few moments, &lt;a href=&quot;https://code.visualstudio.com/&quot;&gt;VSCode&lt;/a&gt; will create a &lt;code&gt;.devcontainer&lt;/code&gt; directory with &lt;code&gt;Dockerfile&lt;/code&gt;, &lt;code&gt;docker-compose.yml&lt;/code&gt; &amp;amp; &lt;code&gt;devcontainer.json&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Looking at the generated code, it seems we need some modifications for this to work properly.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;We want to initialize the databases as per &lt;code&gt;database_creator.sql&lt;/code&gt;.
This file contains all the databases we require to run the test cases.
This can be done by adding an entry to the &lt;code&gt;db&lt;/code&gt; service&apos;s &lt;code&gt;volumes&lt;/code&gt; as shown in the &lt;code&gt;docker-compose.yml&lt;/code&gt; file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Java image should be &lt;code&gt;java:8-bullseye&lt;/code&gt;, not &lt;code&gt;java:0-8-bullseye&lt;/code&gt; which was generated by the generator.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We need &lt;code&gt;Postgres:9.6&lt;/code&gt;; the generated one points to the latest image.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;&lt;code&gt;devcontainer.json&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/java-postgres
{
	&quot;name&quot;: &quot;Java &amp;amp; PostgreSQL&quot;,
	&quot;dockerComposeFile&quot;: &quot;docker-compose.yml&quot;,
	&quot;service&quot;: &quot;app&quot;,
	&quot;workspaceFolder&quot;: &quot;/workspaces/${localWorkspaceFolderBasename}&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;# Changed java:0-8-bullseye to java:8-bullseye
FROM mcr.microsoft.com/devcontainers/java:8-bullseye

ARG INSTALL_MAVEN=&quot;true&quot;
ARG MAVEN_VERSION=&quot;&quot;

ARG INSTALL_GRADLE=&quot;false&quot;
ARG GRADLE_VERSION=&quot;&quot;

RUN if [ &quot;${INSTALL_MAVEN}&quot; = &quot;true&quot; ]; then su vscode -c &quot;umask 0002 &amp;amp;&amp;amp; . /usr/local/sdkman/bin/sdkman-init.sh &amp;amp;&amp;amp; sdk install maven \&quot;${MAVEN_VERSION}\&quot;&quot;; fi \
    &amp;amp;&amp;amp; if [ &quot;${INSTALL_GRADLE}&quot; = &quot;true&quot; ]; then su vscode -c &quot;umask 0002 &amp;amp;&amp;amp; . /usr/local/sdkman/bin/sdkman-init.sh &amp;amp;&amp;amp; sdk install gradle \&quot;${GRADLE_VERSION}\&quot;&quot;; fi

&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;version: &apos;3.8&apos;

volumes:
  postgres-data:

services:
  app:
    container_name: javadev
    build: 
      context: .
      dockerfile: Dockerfile
    environment:
      # NOTE: POSTGRES_DB/USER/PASSWORD should match values in db container
        POSTGRES_PASSWORD: postgres
        POSTGRES_USER: postgres
        POSTGRES_DB: postgres
        # We will populate the database using database_creator.sql
        # POSTGRES_HOSTNAME: postgresdb 

    volumes:
      - ../..:/workspaces:cached
    command: sleep infinity

    # Runs app on the same network as the database container, allows &quot;forwardPorts&quot; in devcontainer.json function.
    network_mode: service:db

    # Use &quot;forwardPorts&quot; in **devcontainer.json** to forward an app port locally. 
    # (Adding the &quot;ports&quot; property to this file will not forward from a Codespace.)

  db:
    container_name: postgresdb
    image: postgres:9.6
    restart: always
    volumes:
      # Run the below script as an initialization script 
      - ../misc/database_creator.sql:/docker-entrypoint-initdb.d/database_creator.sql
      # We no longer require this
      # - postgres-data:/var/lib/postgresql/data
    environment:
      # NOTE: POSTGRES_DB/USER/PASSWORD should match values in app container
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      # We will create our databases using misc/database_creator.sql
      # POSTGRES_DB: postgres

    # Add &quot;forwardPorts&quot;: [&quot;5432&quot;] to **devcontainer.json** to forward PostgreSQL locally.
    # (Adding the &quot;ports&quot; property to this file will not forward from a Codespace.)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above configurations will run &lt;code&gt;PostgreSQL&lt;/code&gt; along with our app.
You can also view this combination in Docker Desktop.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Here I haven&apos;t used &lt;code&gt;git&lt;/code&gt; and &lt;code&gt;gh&lt;/code&gt; features because the image &lt;code&gt;java:8-bullseye&lt;/code&gt; already comes with these tools.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let&apos;s verify the &lt;code&gt;Java&lt;/code&gt; and &lt;code&gt;Maven&lt;/code&gt; versions.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; java -version
openjdk version &quot;1.8.0_352&quot;
OpenJDK Runtime Environment (Temurin)(build 1.8.0_352-b08)
OpenJDK 64-Bit Server VM (Temurin)(build 25.352-b08, mixed mode)
&amp;gt; mvn -version
Apache Maven 3.8.7 (b89d5959fcde851dcb1c8946a785a163f14e1e29)
Maven home: /usr/local/sdkman/candidates/maven/current
Java version: 1.8.0_352, vendor: Temurin, runtime: /usr/local/sdkman/candidates/java/8.0.352-tem/jre
Default locale: en_US, platform encoding: UTF-8
OS name: &quot;linux&quot;, version: &quot;5.15.49-linuxkit&quot;, arch: &quot;aarch64&quot;, family: &quot;unix&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s compile the project.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; mvn clean install
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  35:03 min
[INFO] Finished at: 2023-01-08T00:05:51Z
[INFO] ------------------------------------------------------------------------
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It took a while but the build was successful.&lt;/p&gt;
&lt;h2&gt;Adding fish terminal to the dev container&lt;/h2&gt;
&lt;p&gt;Right now, the dev container is using &lt;code&gt;bash&lt;/code&gt; as its default shell.
Several other shells come along with this image.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; echo $SHELL
/bin/bash

&amp;gt; cat /etc/shells
/bin/sh
/bin/bash
/bin/rbash
/bin/dash
/bin/zsh
/usr/bin/zsh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s try installing the &lt;a href=&quot;https://fishshell.com/&quot;&gt;fish&lt;/a&gt; shell, which is not available by default.
&lt;code&gt;fish&lt;/code&gt; can be installed using the features option in &lt;code&gt;devcontainer.json&lt;/code&gt;.
A list of features is available &lt;a href=&quot;https://containers.dev/features&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Modify the &lt;code&gt;devcontainer.json&lt;/code&gt; as shown below and rebuild the container.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;devcontainer.json&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;{
    ...
    &quot;features&quot;: {
		&quot;ghcr.io/meaningful-ooo/devcontainer-features/fish:1&quot;: {}
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, you can find &lt;code&gt;fish&lt;/code&gt; in the list of available shells. This feature also sets &lt;code&gt;fish&lt;/code&gt; as your default shell.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; cat /etc/shells
/bin/sh
/bin/bash
/bin/rbash
/bin/dash
/bin/zsh
/usr/bin/zsh
/usr/bin/fish   # This is it
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you don&apos;t want &lt;code&gt;fish&lt;/code&gt; as your default, you can specify the default terminal profile in the &lt;code&gt;devcontainer.json&lt;/code&gt; file as follows.
This will set &lt;code&gt;bash&lt;/code&gt; as your default shell.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;devcontainer.json&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;{
  ...
	&quot;settings&quot;: { 
		&quot;terminal.integrated.defaultProfile.linux&quot;: &quot;bash&quot;, 
		&quot;terminal.integrated.profiles.linux&quot;: { 
			&quot;bash&quot;: { 
				&quot;path&quot;: &quot;bash&quot; 
				} 
			} 
		}
  ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;oh-my-zsh with powerlevel10k&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;fish&lt;/code&gt; is a great shell.
But I want the dev container to feel like my local machine that has &lt;code&gt;oh-my-zsh&lt;/code&gt; with &lt;a href=&quot;https://github.com/romkatv/powerlevel10k&quot;&gt;powerlevel10k&lt;/a&gt;.
At the time of writing this blog, I wasn&apos;t able to find any features to install these.
So I went with configuring the &lt;code&gt;Dockerfile&lt;/code&gt; itself to include them.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Disclaimer: It is a very bad idea to directly change a source-controlled dev container file to suit your personal preferences.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This image already comes with &lt;a href=&quot;https://ohmyz.sh/&quot;&gt;oh-my-zsh&lt;/a&gt;. &lt;a href=&quot;https://github.com/romkatv/powerlevel10k&quot;&gt;Powerlevel10k&lt;/a&gt; is a custom theme that I&apos;ll need to install on my own.
Same goes for &lt;a href=&quot;https://github.com/zsh-users/zsh-syntax-highlighting&quot;&gt;zsh-syntax-highlighting&lt;/a&gt; and &lt;a href=&quot;https://github.com/zsh-users/zsh-autosuggestions&quot;&gt;zsh-autosuggestions&lt;/a&gt; plugins.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;...
# Install powerlevel10k
RUN git clone --depth=1 https://github.com/romkatv/powerlevel10k.git /home/vscode/.oh-my-zsh/custom/themes/powerlevel10k
# Install zsh-autosuggestions
RUN git clone https://github.com/zsh-users/zsh-autosuggestions /home/vscode/.oh-my-zsh/custom/plugins/zsh-autosuggestions
# Install zsh-syntax-highlighting
RUN git clone https://github.com/zsh-users/zsh-syntax-highlighting.git /home/vscode/.oh-my-zsh/custom/plugins/zsh-syntax-highlighting
# This is my local machine&apos;s theme file 
COPY .p10k.zsh /home/vscode
# This is the modified zsh configuration
COPY .zshrc /home/vscode
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;.p10k.zsh&lt;/code&gt; contains the &lt;a href=&quot;https://github.com/romkatv/powerlevel10k&quot;&gt;powerlevel10k&lt;/a&gt; configurations.
&lt;code&gt;.zshrc&lt;/code&gt; contains the &lt;code&gt;zsh&lt;/code&gt; configurations.
These files are located inside the home directory.
I just copied them to &lt;code&gt;.devcontainer&lt;/code&gt; for an easy transfer to the dev container.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; tree -a .devcontainer
.devcontainer
├── .p10k.zsh
├── .zshrc
├── Dockerfile
├── devcontainer.json
└── docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;If you don&apos;t have the &lt;code&gt;.p10k.zsh&lt;/code&gt;, you can skip the COPY step while building the container. Once inside the container, you can run &lt;code&gt;p10k configure&lt;/code&gt; to generate the &lt;code&gt;.p10k.zsh&lt;/code&gt; file. Then the contents of this file can be copied to &lt;code&gt;.devcontainer/.p10k.zsh&lt;/code&gt; for future builds.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Below is my &lt;code&gt;.zshrc&lt;/code&gt; file. You can verify that &lt;code&gt;ZSH_THEME&lt;/code&gt; is set to &lt;code&gt;powerlevel10k/powerlevel10k&lt;/code&gt;.
I am using just 3 plugins. The &lt;code&gt;git&lt;/code&gt; plugin is available by default; &lt;code&gt;zsh-autosuggestions&lt;/code&gt; and &lt;code&gt;zsh-syntax-highlighting&lt;/code&gt; were downloaded from GitHub.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Enable Powerlevel10k instant prompt. Should stay close to the top of ~/.zshrc.
# Initialization code that may require console input (password prompts, [y/n]
# confirmations, etc.) must go above this block; everything else may go below.
if [[ -r &quot;${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh&quot; ]]; then
  source &quot;${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh&quot;
fi

export ZSH=&quot;$HOME/.oh-my-zsh&quot;

ZSH_THEME=&quot;powerlevel10k/powerlevel10k&quot;

plugins=(
	git
	zsh-autosuggestions
	zsh-syntax-highlighting
)

source $ZSH/oh-my-zsh.sh

# To customize prompt, run `p10k configure` or edit ~/.p10k.zsh.
[[ ! -f ~/.p10k.zsh ]] || source ~/.p10k.zsh
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;If there are multiple developers, each with their own shell preferences, modifying the &lt;code&gt;Dockerfile&lt;/code&gt; for one specific need is not a very good idea.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Make the dev container distributable&lt;/h2&gt;
&lt;p&gt;You should not change a source-controlled &lt;code&gt;Dockerfile&lt;/code&gt; to satisfy personal preferences.
The current &lt;code&gt;Dockerfile&lt;/code&gt; contains the changes I made to suit mine.
I like &lt;code&gt;zsh&lt;/code&gt;, so I customized the &lt;code&gt;Dockerfile&lt;/code&gt; to support my theming style.
If I commit these changes to the remote, this might become a problem for another developer who prefers some other shell or theme.
Plus, if everyone starts making changes to the dev container configuration, this might result in merge conflicts.&lt;/p&gt;
&lt;p&gt;In scenarios like this, you can use git&apos;s &lt;code&gt;assume-unchanged&lt;/code&gt; feature.
After cloning the base repo, you don&apos;t want to commit any changes to the dev container configuration.
So you can tell git to ignore changes to the &lt;code&gt;.devcontainer&lt;/code&gt; directory.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Tell git to ignore changes to these files
&amp;gt; git update-index --assume-unchanged .devcontainer/*

# View the files marked as assume-unchanged (prefixed with &quot;h&quot;)
&amp;gt; git ls-files -v | grep &quot;h&quot; | grep dev
h .devcontainer/Dockerfile
h .devcontainer/devcontainer.json
h .devcontainer/docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can freely configure the dev container as per your need.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Actually, this is not the best way to deal with this problem at all, as this will ignore the &lt;code&gt;.devcontainer&lt;/code&gt; directory even in the case of &lt;code&gt;git pull&lt;/code&gt;.
And if you do the reverse using &lt;code&gt;git update-index --no-assume-unchanged .devcontainer/*&lt;/code&gt;, you will need to commit or discard your changes before proceeding.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The best way I can think of to deal with this problem is by keeping the personal installations away from the git repository.&lt;/p&gt;
&lt;p&gt;For this, let&apos;s create an external script that will install &lt;code&gt;powerlevel10k&lt;/code&gt; and other plugins using the &lt;code&gt;docker&lt;/code&gt; commands.&lt;/p&gt;
&lt;p&gt;First, let&apos;s move our &lt;code&gt;.zshrc&lt;/code&gt; and &lt;code&gt;.p10k.zsh&lt;/code&gt; to a new directory out of our repository.
I&apos;m creating a hidden directory in the &lt;code&gt;home&lt;/code&gt; directory for this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; mkdir ~/.personal-devcontainer 
&amp;gt; mv .devcontainer/.zshrc ~/.personal-devcontainer
&amp;gt; mv .devcontainer/.p10k.zsh ~/.personal-devcontainer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Also, let&apos;s reset the &lt;code&gt;devcontainer.json&lt;/code&gt;, &lt;code&gt;Dockerfile&lt;/code&gt; and &lt;code&gt;docker-compose.yml&lt;/code&gt; to the unmodified version.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;devcontainer.json&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/java-postgres
{
	&quot;name&quot;: &quot;Java &amp;amp; PostgreSQL&quot;,
	&quot;dockerComposeFile&quot;: &quot;docker-compose.yml&quot;,
	&quot;service&quot;: &quot;app&quot;,
	&quot;workspaceFolder&quot;: &quot;/workspaces/${localWorkspaceFolderBasename}&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;# Changed java:0-8-bullseye to java:8-bullseye
FROM mcr.microsoft.com/devcontainers/java:8-bullseye

ARG INSTALL_MAVEN=&quot;true&quot;
ARG MAVEN_VERSION=&quot;&quot;

ARG INSTALL_GRADLE=&quot;false&quot;
ARG GRADLE_VERSION=&quot;&quot;

RUN if [ &quot;${INSTALL_MAVEN}&quot; = &quot;true&quot; ]; then su vscode -c &quot;umask 0002 &amp;amp;&amp;amp; . /usr/local/sdkman/bin/sdkman-init.sh &amp;amp;&amp;amp; sdk install maven \&quot;${MAVEN_VERSION}\&quot;&quot;; fi \
    &amp;amp;&amp;amp; if [ &quot;${INSTALL_GRADLE}&quot; = &quot;true&quot; ]; then su vscode -c &quot;umask 0002 &amp;amp;&amp;amp; . /usr/local/sdkman/bin/sdkman-init.sh &amp;amp;&amp;amp; sdk install gradle \&quot;${GRADLE_VERSION}\&quot;&quot;; fi

&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;version: &apos;3.8&apos;

volumes:
  postgres-data:

services:
  app:
    container_name: javadev
    build: 
      context: .
      dockerfile: Dockerfile
    environment:
      # NOTE: POSTGRES_DB/USER/PASSWORD should match values in db container
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: postgres
      # We will populate the database using database_creator.sql
      # POSTGRES_HOSTNAME: postgresdb

    volumes:
      - ../..:/workspaces:cached
    command: sleep infinity

    # Runs app on the same network as the database container, allows &quot;forwardPorts&quot; in devcontainer.json function.
    network_mode: service:db

    # Use &quot;forwardPorts&quot; in **devcontainer.json** to forward an app port locally. 
    # (Adding the &quot;ports&quot; property to this file will not forward from a Codespace.)

  db:
    container_name: postgresdb
    image: postgres:9.6
    restart: always
    volumes:
      # Run the below script as an initialization script 
      - ../misc/database_creator.sql:/docker-entrypoint-initdb.d/database_creator.sql
      # We no longer require this
      # - postgres-data:/var/lib/postgresql/data
    environment:
      # NOTE: POSTGRES_DB/USER/PASSWORD should match values in app container
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      # We will create our databases using misc/database_creator.sql
      # POSTGRES_DB: postgres

    # Add &quot;forwardPorts&quot;: [&quot;5432&quot;] to **devcontainer.json** to forward PostgreSQL locally.
    # (Adding the &quot;ports&quot; property to this file will not forward from a Codespace.)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, let&apos;s create a bash script inside &lt;code&gt;~/.personal-devcontainer&lt;/code&gt; that will change the container configuration as per our preference (personal changes).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; cd ~/.personal-devcontainer
&amp;gt; touch install.sh
&amp;gt; chmod +x install.sh
&amp;gt; vim install.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;&lt;code&gt;install.sh&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;#!/usr/bin/env bash
# This will work on a running dev container
CONTAINER_NAME=&quot;$1&quot;
docker exec &quot;$CONTAINER_NAME&quot; git clone --depth=1 https://github.com/romkatv/powerlevel10k.git /home/vscode/.oh-my-zsh/custom/themes/powerlevel10k
docker exec &quot;$CONTAINER_NAME&quot; git clone https://github.com/zsh-users/zsh-autosuggestions /home/vscode/.oh-my-zsh/custom/plugins/zsh-autosuggestions
docker exec &quot;$CONTAINER_NAME&quot; git clone https://github.com/zsh-users/zsh-syntax-highlighting.git /home/vscode/.oh-my-zsh/custom/plugins/zsh-syntax-highlighting
docker cp ~/.personal-devcontainer/.p10k.zsh &quot;$CONTAINER_NAME&quot;:/home/vscode/.p10k.zsh
docker cp ~/.personal-devcontainer/.zshrc &quot;$CONTAINER_NAME&quot;:/home/vscode/.zshrc
# Resolve zsh inside the container, not on the host
docker exec &quot;$CONTAINER_NAME&quot; sh -c &apos;chsh -s &quot;$(which zsh)&quot;&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can just run &lt;code&gt;./install.sh javadev&lt;/code&gt; to install the preferences into the javadev container (our development container&apos;s name).
As I&apos;ll keep changing the script depending on my projects, I&apos;ll publish the installation scripts to my &lt;a href=&quot;https://github.com/GO-Shubham-Kumar/personal-devcontainer.git&quot;&gt;git repo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There are two problems with this method:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You need to rerun the script after every container rebuild.&lt;/li&gt;
&lt;li&gt;You cannot directly use this script online with &lt;code&gt;github.dev&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For easier access, let&apos;s create an alias for the script. It will take a container name as a parameter and will configure the shell as required.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;.zshrc&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;...
# configure-personal-dev-container javadev
function configure-personal-dev-container {
	if [[ -z $1 ]]; then
		echo &quot;Please provide a container name&quot;
		return 1    # don&apos;t use exit here; it would kill the interactive shell
	else
		~/.personal-devcontainer/install.sh &quot;$1&quot;
	fi
	fi
}
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, I can run &lt;code&gt;configure-personal-dev-container javadev&lt;/code&gt; from anywhere.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Dev containers help us reduce the setup time for a project.
We started by creating a simple dev container configuration.
We used the power of dev containers to open the project in &lt;code&gt;VSCode&lt;/code&gt; and &lt;code&gt;github.dev&lt;/code&gt;.
Then we created a more complex configuration consisting of &lt;code&gt;Java&lt;/code&gt; &amp;amp; &lt;code&gt;Maven&lt;/code&gt; with a &lt;code&gt;PostgreSQL&lt;/code&gt; installation and database creation for an ongoing project at &lt;a href=&quot;https://www.greyorange.com/&quot;&gt;GreyOrange&lt;/a&gt;.
We compiled our project inside the dev container, and it worked as expected.
Then we moved on to configuring the &lt;code&gt;shell&lt;/code&gt; inside the dev container.
We tried &lt;code&gt;fish&lt;/code&gt; and &lt;code&gt;zsh&lt;/code&gt; with &lt;code&gt;powerlevel10k&lt;/code&gt;, which is what I use locally.
We also learned how to personalize the dev container without disturbing the source-controlled configuration.&lt;/p&gt;
</content:encoded></item><item><title>A note on creating dynamic link previews for your website or blog</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2022-12-23-info-link-previews/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2022-12-23-info-link-previews/</guid><description>This is a little blog post on how to dynamically add a preview of your website while posting on social media</description><pubDate>Fri, 23 Dec 2022 07:53:55 GMT</pubDate><content:encoded>
&lt;h2&gt;Index&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#general-introduction&quot;&gt;General Introduction&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Understanding link previews&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#defining-meta-tags&quot;&gt;Defining meta tags&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Open Graph and Twitter meta tags&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#viewing-and-debuggig&quot;&gt;Viewing and Debugging&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Using opengraph.xyz to view your previews&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#make-it-dynamic&quot;&gt;Make it dynamic&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Create a custom server to generate images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#hosting-the-server-on-heroku&quot;&gt;Hosting the server on Heroku&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Deploy the server on Heroku and test its working&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Final words&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;General Introduction&lt;/h2&gt;
&lt;p&gt;When you share a URL on a social networking app, a preview can be attached to it.
As a result, the reader gets a decent idea of the content before clicking.&lt;/p&gt;
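&lt;p&gt;Under the hood, these previews are driven by meta tags in the page&apos;s &lt;code&gt;head&lt;/code&gt;.
As a minimal sketch (every URL and text value below is a placeholder), a social network&apos;s crawler reads tags like these:&lt;/p&gt;

```html
<!-- Open Graph tags, read by most social networks -->
<meta property="og:title" content="My blog post" />
<meta property="og:description" content="A short summary shown in the preview" />
<meta property="og:image" content="https://example.com/preview.png" />
<meta property="og:url" content="https://example.com/post" />
<!-- Twitter-specific tag selecting the large-image card layout -->
<meta name="twitter:card" content="summary_large_image" />
```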
&lt;h2&gt;Hosting the server on &lt;a href=&quot;https://dashboard.heroku.com/&quot;&gt;Heroku&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;The Flask &lt;code&gt;dev&lt;/code&gt; server won&apos;t work with &lt;a href=&quot;https://dashboard.heroku.com/&quot;&gt;Heroku&lt;/a&gt; deployments.
Let&apos;s use &lt;a href=&quot;https://gunicorn.org/&quot;&gt;Gunicorn&lt;/a&gt; for our deployments.&lt;/p&gt;
&lt;p&gt;Create a new file with the name &lt;code&gt;Procfile&lt;/code&gt; with no extensions and modify its contents as below.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;Procfile&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;web: gunicorn app:app
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This line tells &lt;a href=&quot;https://dashboard.heroku.com/&quot;&gt;Heroku&lt;/a&gt; to start the &lt;code&gt;web&lt;/code&gt; process with &lt;a href=&quot;https://gunicorn.org/&quot;&gt;Gunicorn&lt;/a&gt;.
&lt;a href=&quot;https://gunicorn.org/&quot;&gt;Gunicorn&lt;/a&gt; uses the syntax &lt;code&gt;gunicorn filename:app_name&lt;/code&gt; to start a server, so &lt;code&gt;app:app&lt;/code&gt; serves the &lt;code&gt;app&lt;/code&gt; object defined in &lt;code&gt;app.py&lt;/code&gt;.
Note that a &lt;code&gt;Procfile&lt;/code&gt; can contain only one entry per process type.&lt;/p&gt;
&lt;p&gt;Also, update the &lt;code&gt;requirements.txt&lt;/code&gt; file by adding &lt;code&gt;gunicorn&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;requirements.txt&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;Flask
pillow
requests
gunicorn
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can use &lt;a href=&quot;https://devcenter.heroku.com/articles/heroku-cli&quot;&gt;Heroku CLI&lt;/a&gt; to host the app following the below steps.&lt;/p&gt;
&lt;p&gt;You can install the Heroku CLI using the instructions &lt;a href=&quot;https://devcenter.heroku.com/articles/heroku-cli&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# First you need git (Heroku uses git)
&amp;gt; git init
&amp;gt; git add .
&amp;gt; git commit -m &quot;My first link preview&quot;

# Login to your account
&amp;gt; heroku login

# This will create an app and add a remote to your git
&amp;gt; heroku create -a &amp;lt;some-unique-name&amp;gt;

&amp;gt; git remote -v
heroku	https://git.heroku.com/&amp;lt;some-unique-name&amp;gt;.git (fetch)
heroku	https://git.heroku.com/&amp;lt;some-unique-name&amp;gt;.git (push)

# To deploy just push your code to Heroku remote
&amp;gt; git push heroku main 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s check it on &lt;a href=&quot;https://opengraph.xyz&quot;&gt;opengraph.xyz&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;[Image: the final preview rendered by opengraph.xyz]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This is exactly what we wanted.
Now, we can create multiple HTML files with different titles and our server will generate a preview accordingly.&lt;/p&gt;
&lt;p&gt;This was a small demo of generating link previews.
You can further customize your server to generate previews based on title, description, publish date and much more.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;A dynamic link preview can be generated by hosting a server and pointing our og/twitter image to it.
Here we used Flask to create a GET route that provides us with the image.
You can add more flexibility/features by using &lt;code&gt;html2img&lt;/code&gt; module for generating images from HTML directly.
We also saw the process of hosting a Gunicorn server on Heroku.
Hosting on Heroku also made it easier to test the deployment on &lt;a href=&quot;https://opengraph.xyz&quot;&gt;opengraph.xyz&lt;/a&gt;, as our reverse proxy had some problems.&lt;/p&gt;
</content:encoded></item><item><title>Combining multiple git repositories into a single repository and retaining all the commit histories</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2022-12-14-monorepo-migration/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2022-12-14-monorepo-migration/</guid><description>Initially, you thought these projects required a separate version control system, but you were wrong. It seems like they are all interdependent. Let&apos;s see how you can merge them together without losing anything. </description><pubDate>Tue, 13 Dec 2022 18:30:00 GMT</pubDate><content:encoded>&lt;h2&gt;Contents&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topics&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#the-problem&quot;&gt;The Problem&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Merge multiple git repositories into one while preserving all histories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#a-bit-about-git&quot;&gt;A bit about git&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Introduction to Blob, Tree and Commits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#rewriting-history&quot;&gt;Rewriting history&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Using filter-branch to rewrite history. Moving a file inside a directory while retaining git histories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#merging-multiple-repos-into-one&quot;&gt;Merging multiple repos into one&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;A step by step guide for combining two git repositories into a larger parent repo while preserving histories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#merging-using-shopsys-monorepo-tools&quot;&gt;Merging using Shopsys monorepo-tools&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Merging two git repos into a larger parent repo using monorepo tools by shopsys&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;#conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Final words&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;The Problem&lt;/h2&gt;
&lt;p&gt;In our company, we had multiple Maven projects that were dependent on each other.
And for every project, we maintained a separate &lt;code&gt;git&lt;/code&gt; repository hosted on Bitbucket.&lt;/p&gt;
&lt;p&gt;Due to a certain use-case, we wanted to migrate all our Bitbucket repos to GitHub.
And to avoid any circular dependency issues, we also wanted to combine all these related repositories into a mono-repo system.&lt;/p&gt;
&lt;p&gt;The idea is to have a parent git repository that will contain all these repositories as a subdirectory.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Things described here will work for any project using git version control, irrespective of the hosting platform.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;&lt;code&gt;Folder structure&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;parent
 ├── POM.XML        # Our new parent POM
 ├── .git           # This is our new git that will contain all the histories
 ├── repo1         
 │   ├── POM.XML
 │   ├── ...
 │   └── .git       # This git will be removed
 ├── repo2
 │   ├── POM.XML
 │   ├── ...
 │   └── .git       # This git will be removed
 └── repo3
     ├── POM.XML
     ├── ...
     └── .git       # This git will be removed
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Combining the repos is the easier part.
As we were using Maven, all we needed was to create a master &lt;code&gt;POM.XML&lt;/code&gt; that would contain all the sub-repos as modules.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Again, the steps defined here are not dependent on Maven or any other package manager.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The difficult part was to retain all the histories, which I&apos;ll explain in this blog.
Because each repository has its own &lt;code&gt;.git&lt;/code&gt;, removing this will delete all the associated histories.
After that, we won&apos;t be able to identify the code changes, commits, and most importantly, the branches.&lt;/p&gt;
&lt;p&gt;At &lt;a href=&quot;https://www.greyorange.com&quot;&gt;GreyOrange&lt;/a&gt;, we maintain a &lt;code&gt;develop&lt;/code&gt; branch and certain &lt;code&gt;release&lt;/code&gt; branches for each repo.
We were required to merge all the respective commits on develop and release branches as well.
We wanted to merge their histories so that we could have all our commits on the same branch as they were originally.
We also had many feature branches, but they were mostly independent, so merging wasn&apos;t required for them.&lt;/p&gt;
&lt;h2&gt;A bit about git&lt;/h2&gt;
&lt;p&gt;When you run &lt;code&gt;git init&lt;/code&gt;, it creates a new &lt;code&gt;.git&lt;/code&gt; directory. This directory contains everything &lt;code&gt;git&lt;/code&gt; needs to do its magic.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;objects&lt;/code&gt; directory&lt;/h3&gt;
&lt;p&gt;The blobs, trees, and commits are all stored in the &lt;code&gt;objects&lt;/code&gt; directory within &lt;code&gt;.git&lt;/code&gt;. These are the 3 basic elements that define all the functionality of &lt;code&gt;git&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Blobs store the compressed contents of your files.
Each time you run &lt;code&gt;git add .&lt;/code&gt;, a new blob is created inside the &lt;code&gt;objects&lt;/code&gt; directory for every changed file.
You can view a blob&apos;s contents using &lt;code&gt;git cat-file&lt;/code&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Blobs are just the file contents, addressed by their &lt;code&gt;SHA-1&lt;/code&gt; hash&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; ls .git/objects
info/     pack/

# Create a file with it&apos;s content as test
&amp;gt; echo &quot;test&quot; &amp;gt;&amp;gt; test.txt
&amp;gt; ls .
    test.txt 

# adding changes creates blob of the changed files
&amp;gt; git add .
&amp;gt; ls .git/objects
0c/   info/    pack/      # 0c is the directory containing the changes

&amp;gt; ls .git/objects/0c
2c5f41c83de09587dfe46d5a5382eddf5bb77f
# The complete hash of the blob is 0c2c5f41c83de09587dfe46d5a5382eddf5bb77f
# Note: The first 2 letters are the directory name

# You can also view the contents of the blob using cat-file utility
&amp;gt; git cat-file blob 0c2c5f41c83de09587dfe46d5a5382eddf5bb77f
test
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Multiple blobs are combined together to form a tree data structure.
Generally, trees are created during a commit.
But you can run &lt;code&gt;git write-tree&lt;/code&gt; to generate a tree of recently added blobs.
These trees are also stored inside the &lt;code&gt;objects&lt;/code&gt; directory.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Trees are like the directories containing the blobs and other trees.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; git write-tree    # This outputs the hash associated with the tree generated
56bac5fc9c69776a5c67daa2225ef9b2e1edd4f6

# Trees are stored in the same manner
&amp;gt; ls .git/objects/56
bac5fc9c69776a5c67daa2225ef9b2e1edd4f6

# You can also view the content of a tree file
# It contains a reference to the blobs or trees
&amp;gt; git cat-file -p 56bac5fc9c69776a5c67daa2225ef9b2e1edd4f6
100644 blob 0c2c5f41c83de09587dfe46d5a5382eddf5bb77f    test.txt
# 0c2c5f... is the hash of the blob created above
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So a tree represents the state of the system.
Each commit references a single root tree, which is hashed and stored along with other information like the author &amp;amp; date.
Commits are stored in the same manner.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; git commit -m &quot;initial commit&quot;

&amp;gt; ls .git/objects
0c/   52/   56/   info/    pack/    # 52 is newly generated directory

&amp;gt; git cat-file -p 5203c0048b4795669114fcdb261dc5bb4e77a54f
tree 56bac5fc9c69776a5c67daa2225ef9b2e1edd4f6       # This is the hash of the tree we created above
author Shubham Kumar &amp;lt;unresolved.shubham@gmail.com&amp;gt; 1670872587 +0530
committer Shubham Kumar &amp;lt;unresolved.shubham@gmail.com&amp;gt; 1670872587 +0530
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A commit just contains the root tree hash, information about the author, committer and time, and, for non-initial commits, the hash of its parent commit(s).&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It&apos;s not possible to distinguish a blob, a tree and a commit just by looking at the objects directory.&lt;/p&gt;
&lt;/blockquote&gt;
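&lt;p&gt;You can, however, ask &lt;code&gt;git&lt;/code&gt; itself for an object&apos;s type with &lt;code&gt;git cat-file -t&lt;/code&gt;. A minimal sketch (the file name and contents are arbitrary):&lt;/p&gt;

```shell
# In a throwaway repo, create one blob and one tree,
# then ask git what type each hash refers to.
cd "$(mktemp -d)"
git init -q demo && cd demo
echo "test" > test.txt
git add .
BLOB=$(git rev-parse :test.txt)   # hash of the staged file's blob
TREE=$(git write-tree)            # tree built from the current index
git cat-file -t "$BLOB"           # prints: blob
git cat-file -t "$TREE"           # prints: tree
```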
&lt;h3&gt;&lt;code&gt;refs&lt;/code&gt; directory&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;refs&lt;/code&gt; directory contains references to commits.
There are 2 types of references - &lt;code&gt;branches&lt;/code&gt; and &lt;code&gt;tags&lt;/code&gt;.
A tag identifies a fixed commit, while a branch points to the latest commit along its line of development.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; ls .git/refs/heads
master               # There is just one branch right now

# Let&apos;s create a new branch and see what happens
&amp;gt; git checkout -b dev
Switched to a new branch &apos;dev&apos;

&amp;gt; ls .git/refs/heads
dev   master        # Now you can see both the branches

&amp;gt; cat .git/refs/heads/dev
5203c0048b4795669114fcdb261dc5bb4e77a54f 
# Points to the latest commit. This is the exact same hash of the commit we created above. 

&amp;gt; git commit -m &quot;commit to dev&quot; --allow-empty
[dev 9d6d972] commit to dev

# Creating a new commit changes the contents of the active branch.
&amp;gt; cat .git/refs/heads/dev
9d6d972066b774e89343e57f2eb053559bf3f22c 
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;&lt;code&gt;logs&lt;/code&gt; directory&lt;/h3&gt;
&lt;p&gt;This contains the history of every branch.
Every time you change the branch using &lt;code&gt;git checkout &amp;lt;branch-name&amp;gt;&lt;/code&gt; or update the tip, &lt;code&gt;logs/HEAD&lt;/code&gt; is updated.
&lt;code&gt;logs/refs/heads&lt;/code&gt; contains the history of commits for a particular branch.
This is a safety net: you can easily retrieve your work even after a rebase.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# To view the logs/HEAD
&amp;gt; git reflog
9d6d972 (HEAD -&amp;gt; dev) HEAD@{0}: checkout: moving from master to dev
5203c00 (master) HEAD@{1}: checkout: moving from dev to master
9d6d972 (HEAD -&amp;gt; dev) HEAD@{2}: commit: commit to dev
5203c00 (master) HEAD@{3}: checkout: moving from master to dev
5203c00 (master) HEAD@{4}: commit (initial): initial commit
&lt;/code&gt;&lt;/pre&gt;
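&lt;p&gt;As a quick sketch of that safety net: even after a &lt;code&gt;git reset --hard&lt;/code&gt; throws a commit away, the reflog entry lets you get it back (commit messages here are arbitrary):&lt;/p&gt;

```shell
# Recover a commit after a hard reset by using the reflog
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"
git reset -q --hard HEAD~1        # "second" is gone from the branch...
git reset -q --hard "HEAD@{1}"    # ...but HEAD@{1} in the reflog still has it
git log --format=%s -1            # prints: second
```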
&lt;h2&gt;Rewriting History&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://git-scm.com/docs/git-filter-branch&quot;&gt;&lt;code&gt;filter-branch&lt;/code&gt;&lt;/a&gt; can be used to rewrite the entire history of a git repository.
This will create a new blob, tree and commit for everything once again.
Rewriting history using &lt;code&gt;filter-branch&lt;/code&gt; does not hamper the commit data.
Hashes will change, but the information about the author, committer, etc. remains unchanged.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# For all the commits do this -&amp;gt; mkdir nested; mv test.txt nested/test.txt
&amp;gt; git filter-branch --tree-filter &apos;mkdir nested; mv test.txt nested/test.txt&apos; HEAD

&amp;gt; tree .
.
└── nested
    └── test.txt

&amp;gt; git cat-file -p d1950a39cbb6b1212e47d0c5be3b19e023051671  # This I got by looking inside the `objects` dir
tree 3ea8798ffe7147c04d66c9bef3ac5109ad6e80b8   # New tree reference
author Shubham Kumar &amp;lt;unresolved.shubham@gmail.com&amp;gt; 1670872587 +0530     # Commit time remained unchanged
committer Shubham Kumar &amp;lt;unresolved.shubham@gmail.com&amp;gt; 1670872587 +0530  

initial commit

# Let&apos;s see what&apos;s inside this tree
# We moved our &apos;test.txt&apos; file inside the &apos;nested&apos; directory. This creates a new tree. 
&amp;gt; git cat-file -p 3ea8798ffe7147c04d66c9bef3ac5109ad6e80b8      
040000 tree 2b297e643c551e76cfa1f93810c50811382f9117    nested  

# Inside the &apos;nested&apos; directory is our file.
&amp;gt; git cat-file -p 2b297e643c551e76cfa1f93810c50811382f9117
100644 blob 9daeafb9864cf43055ae93beb0afd6c7d144bfa4    test.txt

# And inside the file is the content. 
&amp;gt; git cat-file blob 9daeafb9864cf43055ae93beb0afd6c7d144bfa4
test
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Merging multiple repos into one&lt;/h2&gt;
&lt;p&gt;Merging multiple repositories requires us to take the objects and other entities from one repo and copy them to another repo.
For the sake of explanation, I created 2 test repos - &lt;a href=&quot;https://github.com/unresolvedcold/test-repo-1&quot;&gt;test-repo-1&lt;/a&gt; and &lt;a href=&quot;https://github.com/unresolvedcold/test-repo-2&quot;&gt;test-repo-2&lt;/a&gt;.
Both contain a single &lt;code&gt;README.md&lt;/code&gt; file with some content. We&apos;ll try merging them into a monorepo system with a parent git and these repos as subdirectories.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; cat test-repo-1/README.md
# test-repo-1

This is a test repository.

&amp;gt; git log --reflog  # To view all the commits
commit f23153640c79ef57849be2baac14f6daa7b96a1c (HEAD -&amp;gt; main, origin/main, origin/HEAD)
Author: Shubham Kumar &amp;lt;35415266+UnresolvedCold@users.noreply.github.com&amp;gt;
Date:   Tue Dec 13 11:02:33 2022 +0530

    Updated README.MD

commit 527ee4e25e8a145448bf799412c369c6cbc8e934
Author: Shubham Kumar &amp;lt;35415266+UnresolvedCold@users.noreply.github.com&amp;gt;
Date:   Tue Dec 13 11:01:57 2022 +0530

    Initial commit
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; cat test-repo-2/README.md
# test-repo-2

This is some other test repo.

&amp;gt; git log --reflog  # To view all the commits
commit 69ed70f51232f81a29a537fb6534e91d1d1ac9c2 (HEAD -&amp;gt; main, origin/main, origin/HEAD)
Author: Shubham Kumar &amp;lt;35415266+UnresolvedCold@users.noreply.github.com&amp;gt;
Date:   Tue Dec 13 11:04:00 2022 +0530

    Updated README.MD &amp;lt;repo-2&amp;gt;

commit fe71fde928e17f4adbba0c637a6afc4c503f455a
Author: Shubham Kumar &amp;lt;35415266+UnresolvedCold@users.noreply.github.com&amp;gt;
Date:   Tue Dec 13 11:03:12 2022 +0530

    Initial commit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have 2 git repos with 2 commits on &lt;code&gt;main&lt;/code&gt; branch. Let&apos;s try merging them.&lt;/p&gt;
&lt;h4&gt;&lt;code&gt;Current folder structure&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;.
├── test-repo-1
│   ├── .git        # This will be removed
│   └── README.md
└── test-repo-2
    ├── .git        # This will be removed
    └── README.md
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;&lt;code&gt;New folder structure&lt;/code&gt;&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;parent
├── .git            # New git that will contain all our histories
├── test-repo-1
│   └── README.md
└── test-repo-2
    └── README.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We will create a new git repository.
And include the remotes of the repository we want to merge.
We want to fetch everything from these repos. This is done by calling &lt;code&gt;git fetch --all&lt;/code&gt;.
Then, one by one, we can create a new subdirectory for each repository.
And at last, we can manipulate the git history into thinking the codes were present in the subdirectory from the beginning.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; git init  # Create a new repository

# Fetch the repos we want to merge 
&amp;gt; git remote add repo1 git@github.com:UnresolvedCold/test-repo-1.git
&amp;gt; git remote add repo2 git@github.com:UnresolvedCold/test-repo-2.git
&amp;gt; git fetch --all
Fetching repo1
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (6/6), 1.23 KiB | 180.00 KiB/s, done.
From github.com:UnresolvedCold/test-repo-1
 * [new branch]      main       -&amp;gt; repo1/main
Fetching repo2
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (6/6), 1.24 KiB | 424.00 KiB/s, done.
From github.com:UnresolvedCold/test-repo-2
 * [new branch]      main       -&amp;gt; repo2/main
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we have everything updated. We can start our merging process.
Checkout the branch of the first repository and modify the index file such that repo1 files are inside the &lt;code&gt;repo1&lt;/code&gt; directory.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Let&apos;s start with the first repo main branch
&amp;gt; git checkout --detach repo1/main

&amp;gt; git ls-files -s   # This is how the file is presently 
100644 142ee887e3184237ac17f918498bda405eeb5fc1 0	README.md

# We want to move README.md to repo1/README.md
&amp;gt; git ls-files -s | sed &quot;s-\t\&quot;*-&amp;amp;repo1/-&quot;      # We want to do this for all the commits
100644 142ee887e3184237ac17f918498bda405eeb5fc1 0	repo1/README.md

# After doing this for all the commits we will update the index file
&amp;gt; git filter-branch --index-filter &apos;
     git ls-files -s | sed &quot;s-\t\&quot;*-&amp;amp;repo1/-&quot; | 
     GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info &amp;amp;&amp;amp;  mv &quot;$GIT_INDEX_FILE.new&quot; &quot;$GIT_INDEX_FILE&quot;&apos; 

&amp;gt; ls -a 
./    ../    .git/    repo1/   # A new directory called repo1 is created

&amp;gt; ls repo1/
README.md

# We can see the first repo&apos;s contents are inside the repo1 directory
&amp;gt; cat repo1/README.md
# test-repo-1

This is a test repository.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above looks good: we can see the files related to repo1 in the &lt;code&gt;repo1&lt;/code&gt; subdirectory.
Let&apos;s explore the history. Each commit should now contain a tree called &lt;code&gt;repo1&lt;/code&gt; with &lt;code&gt;README.md&lt;/code&gt; inside it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Let&apos;s see the git commit list
&amp;gt; git log --pretty=format:&quot;%H&quot;
495e27b84df8e4e6d297b37a03f1c6c0a5fdcc73    # This is the latest commit
8a5f4f3805be9604b1f96b03df5cf1092363ba91

# Let&apos;s view the histories for the latest commit
&amp;gt; git cat-file -p 495e27b84df8e4e6d297b37a03f1c6c0a5fdcc73
tree 38eccdd305b8e410a1993391031e948e664d7b1d
parent 8a5f4f3805be9604b1f96b03df5cf1092363ba91
author Shubham Kumar &amp;lt;35415266+UnresolvedCold@users.noreply.github.com&amp;gt; 1670909553 +0530
committer GitHub &amp;lt;noreply@github.com&amp;gt; 1670909553 +0530

Updated README.MD%
&amp;gt; git cat-file -p 38eccdd305b8e410a1993391031e948e664d7b1d
040000 tree d83102ebb26f925b8e087c821e49ca910f1750a6	repo1   # Has a directory called repo1
&amp;gt; git cat-file -p d83102ebb26f925b8e087c821e49ca910f1750a6
100644 blob 142ee887e3184237ac17f918498bda405eeb5fc1	README.md   # repo1 has README.MD

# Let&apos;s quickly check the second commit id
&amp;gt; git cat-file -p 8a5f4f3805be9604b1f96b03df5cf1092363ba91 | grep tree
tree 0dc5c3d1b78ed4dcd97120149441a8e3a8d6aefa
&amp;gt; git cat-file -p 0dc5c3d1b78ed4dcd97120149441a8e3a8d6aefa
040000 tree cc92c678191b4d3c4731f5d0f9b212532316ef8c	repo1     # Check
&amp;gt; git cat-file -p cc92c678191b4d3c4731f5d0f9b212532316ef8c
100644 blob 6a642ba8d3e31ba9a02606da98dd4a73a2d554e2	README.md # Check
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can see that both the files and commit histories were successfully migrated to repo1.
Now we can do the same process for repo2.
If you have more than one branch, you&apos;ll need to repeat the process for each one.&lt;/p&gt;
&lt;p&gt;But first, we must delete the &lt;code&gt;refs/original&lt;/code&gt; backup reference that &lt;code&gt;filter-branch&lt;/code&gt; created, so that the next rewrite can create a new one.
We also want to merge this rewritten history with the next one later on.
So let&apos;s save the current HEAD and then delete the reference.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Our current HEAD (latest commit in our case)
&amp;gt; git rev-parse HEAD
495e27b84df8e4e6d297b37a03f1c6c0a5fdcc73

# Save the current HEAD for merging later on
&amp;gt; REPO1=$(git rev-parse HEAD)

# Delete the original references
&amp;gt; git for-each-ref --format=&quot;%(refname)&quot; refs/original/
refs/original/HEAD
&amp;gt; git update-ref -d refs/original/HEAD
&amp;gt; git reset --hard
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We are now ready to merge &lt;code&gt;repo2&lt;/code&gt;. Again we&apos;ll need to reset the HEAD before merging.
We&apos;ll create a new branch called &lt;code&gt;main&lt;/code&gt; and merge both repos to it one by one.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; git checkout --detach repo2/main
&amp;gt; git filter-branch --index-filter &apos;
     git ls-files -s | sed &quot;s-\t\&quot;*-&amp;amp;repo2/-&quot; | 
     GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info &amp;amp;&amp;amp;  mv &quot;$GIT_INDEX_FILE.new&quot; &quot;$GIT_INDEX_FILE&quot;&apos; 

# Save the reference
&amp;gt; REPO2=$(git rev-parse HEAD)

# Reset HEAD (again before merging)
&amp;gt; git update-ref -d refs/original/HEAD
&amp;gt; git reset --hard
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we can begin our merging process.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; git checkout -b main
&amp;gt; git merge --no-commit -q $REPO1 $REPO2 --allow-unrelated-histories
# If there are no merge conflicts, you can commit right away; otherwise resolve them first
&amp;gt; git commit -m &quot;Migrated&quot;

&amp;gt; tree
.
├── repo1
│   └── README.md
└── repo2
    └── README.md

&amp;gt; git log --oneline
d0b5806 (HEAD -&amp;gt; main) Migrated
074ef5b Updated README.MD &amp;lt;repo-2&amp;gt;
025ec51 Initial commit
495e27b Updated README.MD
8a5f4f3 Initial commit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That is the fundamental method for merging two repositories into one while retaining their full commit histories.
Of course, we never want to redo this manually for every repo and every branch.
A script would be helpful.&lt;/p&gt;
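The manual steps above can be collected into a small script. The following is a hypothetical sketch, not the exact script we used: it assumes it runs inside a freshly initialised repo, that each source repo has already been added and fetched as a remote named after its target subdirectory, and that every source uses a `main` branch. It mirrors the two-repo flow shown above.

```shell
#!/bin/sh
# Hypothetical sketch of the manual flow above, wrapped in a function.
# Usage: merge_into_monorepo <remote1> <remote2>
# Assumes: run inside a freshly initialised repo, each <remote> already
# added and fetched, each with a `main` branch. Each remote's history
# is rewritten under a subdirectory named after the remote.
export FILTER_BRANCH_SQUELCH_WARNING=1   # skip filter-branch's 10s warning pause

merge_into_monorepo() {
    merge_refs=""
    for repo in "$@"; do
        git checkout --detach "$repo/main"
        # Rewrite every commit so its tree lives under "$repo/"
        git filter-branch -f --index-filter '
            git ls-files -s | sed "s-\t\"*-&'"$repo"'/-" |
            GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info &&
            mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"' HEAD
        merge_refs="$merge_refs $(git rev-parse HEAD)"
        # Drop the backup ref so the next filter-branch starts clean
        git update-ref -d refs/original/HEAD
        git reset --hard
    done
    # Tie the rewritten histories together on a new main branch
    # ($merge_refs is intentionally unquoted so it splits into refs)
    git checkout -b main
    git merge --no-commit -q $merge_refs --allow-unrelated-histories
    git commit -m "Migrated"
}
```

As in the post, the final merge is run with HEAD already at the last rewritten tip, so git reduces it to a merge with the remaining unrelated history.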
&lt;h2&gt;Merging using Shopsys monorepo-tools&lt;/h2&gt;
&lt;p&gt;And as they say,&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Don&apos;t reinvent the wheel.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;There are numerous such merge tools available online; we used &lt;a href=&quot;https://github.com/shopsys/monorepo-tools&quot;&gt;monorepo-tools by shopsys&lt;/a&gt;.
We made a few tweaks for our use case. By default, the tool only merges the &lt;code&gt;master&lt;/code&gt; branch,
but we want to merge the &lt;code&gt;main&lt;/code&gt; branch, so we&apos;ll need to update the &lt;code&gt;shopsys&lt;/code&gt; scripts accordingly.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Under the hood, monorepo-tools is using the same commands described above.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code&gt;# Replace master with main in the monorepo-tools scripts
# Run this inside the shopsys monorepo-tools directory
&amp;gt; for f in *.sh; do c=$(sed &apos;s/master/main/g&apos; &quot;$f&quot;); printf &apos;%s\n&apos; &quot;$c&quot; &amp;gt; &quot;$f&quot;; done

# Note: sed &apos;s/master/main/g&apos; $f &amp;gt; $f does NOT work on any system:
# the shell truncates $f before sed ever reads it, leaving an empty file.
&lt;/code&gt;&lt;/pre&gt;
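An arguably cleaner portable approach is to write to a temporary file and move it over the original, since GNU sed (`sed -i`) and BSD/macOS sed (`sed -i ''`) disagree on the in-place flag. A hypothetical helper (`replace_master_with_main` is my name for it, not part of monorepo-tools):

```shell
# Portable in-place substitution without `sed -i`, whose syntax differs
# between GNU sed (`-i`) and BSD/macOS sed (`-i ''`): write the result
# to a temporary file first, then move it over the original.
replace_master_with_main() {
    for f in "$@"; do
        tmp=$(mktemp)
        sed 's/master/main/g' "$f" > "$tmp" && mv "$tmp" "$f"
    done
}
```

Usage: `replace_master_with_main *.sh` from inside the monorepo-tools directory.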
&lt;p&gt;Now let&apos;s see how to merge two repos using shopsys monorepo-tools.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# First clone the shopsys repo
&amp;gt; git clone git@github.com:shopsys/monorepo-tools.git

# Create a new repo
&amp;gt; mkdir parent
&amp;gt; cd parent
&amp;gt; git init

# Add the remotes and fetch
&amp;gt; git remote add repo1 git@github.com:UnresolvedCold/test-repo-1.git
&amp;gt; git remote add repo2 git@github.com:UnresolvedCold/test-repo-2.git
&amp;gt; git fetch --all

# Run Shopsys mono-repo tool
&amp;gt; ../monorepo-tools/monorepo_build.sh repo1 repo2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s how easy it is to merge multiple repos using shopsys monorepo-tools.
The tool can do a few other things as well, such as splitting a monorepo back into multiple repos (the reverse of what we did here), but we won&apos;t explore those here.&lt;/p&gt;
&lt;p&gt;Let&apos;s verify that everything worked!&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt; tree .
.
├── repo1
│   └── README.md
└── repo2
    └── README.md   # Both the repositories are under their sub-directories

&amp;gt; git log --oneline
ad3f765 (HEAD -&amp;gt; main) merge multiple repositories into a monorepo
074ef5b Updated README.MD &amp;lt;repo-2&amp;gt;
025ec51 Initial commit
495e27b Updated README.MD
8a5f4f3 Initial commit

# Explore the latest commit
&amp;gt; git cat-file -p ad3f765 | grep tree
tree 630e5e1c57649efcd9929baa790768927783659e
&amp;gt; git cat-file -p 630e5e1c57649efcd9929baa790768927783659e
040000 tree d83102ebb26f925b8e087c821e49ca910f1750a6	repo1   # Check
040000 tree 7801ea011e82b38c3f3de145571ad75536d5bd5c	repo2   # Check
&amp;gt; git cat-file -p d83102ebb26f925b8e087c821e49ca910f1750a6
100644 blob 142ee887e3184237ac17f918498bda405eeb5fc1	README.md   # Check
&amp;gt; git cat-file blob 142ee887e3184237ac17f918498bda405eeb5fc1
# test-repo-1

This is a test repository.
&amp;gt; git cat-file -p 7801ea011e82b38c3f3de145571ad75536d5bd5c
100644 blob 18c9019fe2e7d5e8151db4cb5f1d10307c8547ec	README.md   # Check
&amp;gt; git cat-file blob 18c9019fe2e7d5e8151db4cb5f1d10307c8547ec
# test-repo-2

This is some other test repo.

# We can also verify commits 074ef5b and 495e27b to see if README.md is inside their respective folders
&amp;gt; git cat-file -p 074ef5b | grep tree
tree 057f76982f62d051ed841e43c09536d3f3c61980
&amp;gt; git cat-file -p 057f76982f62d051ed841e43c09536d3f3c61980
040000 tree 7801ea011e82b38c3f3de145571ad75536d5bd5c	repo2     # Check
&amp;gt; git cat-file -p 7801ea011e82b38c3f3de145571ad75536d5bd5c
100644 blob 18c9019fe2e7d5e8151db4cb5f1d10307c8547ec	README.md # Check

&amp;gt; git cat-file -p 495e27b | grep tree
tree 38eccdd305b8e410a1993391031e948e664d7b1d
&amp;gt; git cat-file -p 38eccdd305b8e410a1993391031e948e664d7b1d
040000 tree d83102ebb26f925b8e087c821e49ca910f1750a6	repo1     # Check
&amp;gt; git cat-file -p d83102ebb26f925b8e087c821e49ca910f1750a6
100644 blob 142ee887e3184237ac17f918498bda405eeb5fc1	README.md # Check
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Everything seems perfect.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Merging multiple repositories into a monorepo can be accomplished by rewriting &lt;code&gt;git&lt;/code&gt; histories.
You can do this manually by fetching each repo and rewriting the git index so that every file appears to live in a subdirectory rather than at the root of the repo.
Alternatively, open-source tools like &lt;a href=&quot;https://github.com/shopsys/monorepo-tools&quot;&gt;shopsys monorepo tools&lt;/a&gt; let you do the merge in a few steps.
&lt;code&gt;monorepo-tools&lt;/code&gt; can be tweaked further, for example to merge multiple branches or to create a higher-level &lt;code&gt;POM&lt;/code&gt; inside every commit.&lt;/p&gt;
</content:encoded></item><item><title>A simple blogging site for personal use</title><link>https://blog-schwiftycold.firebaseapp.com/blog/2022-12-05-create-your-own-custom-blog/</link><guid isPermaLink="true">https://blog-schwiftycold.firebaseapp.com/blog/2022-12-05-create-your-own-custom-blog/</guid><description>it&apos;s an adventure to control your contents and hosting platforms</description><pubDate>Sun, 04 Dec 2022 18:30:00 GMT</pubDate><content:encoded>&lt;p&gt;This is how I quickly set up the platform that hosts the blog you are reading right now.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There isn&apos;t a way to post comments as of now. You&apos;ll need to ping me on social platforms.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;All I needed was a text editor with dev container support (like &lt;strong&gt;VS Code&lt;/strong&gt;), &lt;strong&gt;Docker&lt;/strong&gt;, and a &lt;strong&gt;GitHub account&lt;/strong&gt;.
For deployment there are many static site hosting platforms; I went with &lt;strong&gt;&lt;a href=&quot;https://firebase.google.com/&quot;&gt;firebase hosting&lt;/a&gt;&lt;/strong&gt;.
On the advice of a close friend, &lt;a href=&quot;https://mobile.twitter.com/guruxkancharla&quot;&gt;Guruvardhan&lt;/a&gt;, the site is built using Astro (with MDX pages for blogging).&lt;/p&gt;
&lt;h2&gt;Understanding the flow&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;There are many templates available in astro library. The one I&apos;m using is &lt;a href=&quot;https://github.com/lancerossdev/astro-basic-blog&quot;&gt;Lance Ross astro basic template&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Folder structure&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;src
├── components
├── images
├── layouts
│   ├── BlogLayout.astro
│   └── Layout.astro
└── pages
    ├── 404.astro
    ├── index.astro
    ├── posts
    │   ├── but-why-tho.mdx
    │   ├── how-to-deploy.mdx
    │   └── markdown-styling.mdx
    ├── posts.astro
    └── rss.xml.ts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;src/pages/posts&lt;/code&gt; is where the blog posts live; each of these &lt;code&gt;mdx&lt;/code&gt; files is a separate post.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;src/pages/*.astro&lt;/code&gt; contains the main view components of the website. The layouts are defined at &lt;code&gt;src/layouts&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;rss.xml.ts&lt;/code&gt; is the configuration for my blog&apos;s RSS feed. You can subscribe using Feedly or any other newsreader app.&lt;/p&gt;
&lt;h3&gt;Publishing the blog&lt;/h3&gt;
&lt;p&gt;The default branch of my blog repository is called &lt;code&gt;publish&lt;/code&gt;.
Any push to it automatically triggers a rebuild and redeployment of the website.&lt;/p&gt;
&lt;p&gt;So I create a new branch for each new post, and when I&apos;m finished I merge the changes into the &lt;code&gt;publish&lt;/code&gt; branch.
This triggers the &lt;code&gt;deploy-firebase&lt;/code&gt; workflow, which pushes the build to firebase hosting.
The updated blog is live within seconds.&lt;/p&gt;
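The branch-and-merge flow described above looks roughly like this in commands. This is an illustrative sketch: the function and branch names (other than `publish`, the deploy branch from this post) are my own.

```shell
# Sketch of the publishing flow: draft on a topic branch, then merge
# into `publish`, whose update triggers the deploy workflow.
# Usage: publish_post <topic-branch> <post-file>
publish_post() {
    branch=$1
    file=$2
    git checkout -b "$branch"   # draft the post on its own branch
    git add "$file"
    git commit -m "Add new blog post"
    git checkout publish
    git merge "$branch"         # merging into publish triggers the rebuild
    git push origin publish     # the push kicks off the deploy workflow
}
```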
&lt;h2&gt;How to launch your own instance&lt;/h2&gt;
&lt;p&gt;You can always fork my repository &lt;a href=&quot;https://github.com/UnresolvedCold/blog&quot;&gt;here&lt;/a&gt;.
You may need to activate GitHub Actions in your fork.
A better way is to use the &lt;a href=&quot;https://github.com/lancerossdev/astro-basic-blog&quot;&gt;Lance Ross astro basic template&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Click on &lt;code&gt;Use this template&lt;/code&gt; -&amp;gt; &lt;code&gt;Create a new repository&lt;/code&gt; to create a new repository from the template.
You can then customize the repository as you like.&lt;/p&gt;
&lt;p&gt;For firebase hosting, you&apos;ll need to create a new app from the Firebase console. Update &lt;code&gt;.firebaserc&lt;/code&gt; with the name of your app.
You&apos;ll also need &lt;code&gt;firebase-cli&lt;/code&gt; to generate an auth token; simply run &lt;code&gt;firebase login:ci&lt;/code&gt; from your terminal to get it.
Save this token as a GitHub Actions secret with the key &apos;FIREBASE_TOKEN&apos; and the generated token as its value.&lt;/p&gt;
&lt;h3&gt;Get a custom domain and link it to firebase hosting&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Create an account on &lt;a href=&quot;https://porkbun.com&quot;&gt;Porkbun&lt;/a&gt; and buy a domain of your liking.&lt;/li&gt;
&lt;li&gt;Go to the Firebase console and click on Hosting. There you&apos;ll find the &lt;code&gt;Add custom domain&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Clicking it will ask you to provide your domain (for example, shubham.codes). Don&apos;t include &apos;https&apos; or &apos;www&apos;.&lt;/li&gt;
&lt;li&gt;In the next steps, it will ask you to update the &lt;code&gt;A&lt;/code&gt; and &lt;code&gt;TXT&lt;/code&gt; records with your domain provider, which is &lt;a href=&quot;https://porkbun.com&quot;&gt;Porkbun&lt;/a&gt; in our case.
&lt;ol&gt;
&lt;li&gt;Go to porkbun and click on the &lt;code&gt;Details&lt;/code&gt; button on your domain name.&lt;/li&gt;
&lt;li&gt;Click on DNS records and update the &lt;code&gt;A&lt;/code&gt; and &lt;code&gt;TXT&lt;/code&gt; entries.&lt;/li&gt;
&lt;li&gt;It takes some time for the domain to become active. Firebase mentions up to 24 hours, but for me it took around 2 hours.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
</content:encoded></item></channel></rss>