How to learn vim properly

Vim is the editor of my choice; I love it a lot. I try to find vim bindings everywhere I can. A few apps which have good vim bindings:

  1. Chrome with vimium
  2. The terminal with a proper ~/.inputrc. Mine is below:

    # ~/.inputrc
    #vim key bindings
    set editing-mode vi
    set keymap vi
    # do not bell on tab-completion
    set bell-style none
    set expand-tilde off
    set input-meta off
    set convert-meta on
    set output-meta off
    set horizontal-scroll-mode off
    set history-preserve-point on
    set mark-directories on
    set mark-symlinked-directories on
    set match-hidden-files off
    # completion settings
    set page-completions off
    set completion-query-items 2000
    set completion-ignore-case off
    set show-all-if-ambiguous on
    set show-all-if-unmodified on
    set completion-prefix-display-length 10
    set print-completions-horizontally off
    C-n: history-search-forward
    C-p: history-search-backward
    #new stuff
    "\C-a": history-search-forward
  3. Once you set this up, many REPLs will respect these bindings, for instance irb and pry. As a matter of fact, any good terminal app which uses the readline library will respect them.

  4. Tmux is another piece of software that has vim bindings.

Whenever I work with someone, people always seem to be impressed that vim can do so much so simply. This is really the power of vim: it was built for text editing, and it is the best tool for that job. However, learning it can be quite painful, and many people abandon it within a few days.

There is a very popular learning curve graph about vim

Editor learning curves

Source

The part about vim is partially true, in that once it clicks everything falls into place.

Notepad is an editor which is very easy to use, but if you compare it to programming languages it has the capability of a calculator: you put your cursor in a place, type stuff, and that is all. Vim lets you speak to it in an intelligent way. Anyway, I am rambling at this point.

The reason I am writing this blog post in the middle of the night is that many people ask me “How should I set up vim? I’d love to have it look/work like yours.” Many times I point them to my vimrc. However, if you are planning on learning vim, don’t go there. Start with the following ~/.vimrc:

set nocompatible
" plugins
call plug#begin('~/.vim/plugged')
Plug 'tpope/vim-sensible'
Plug 'kien/ctrlp.vim'
Plug 'vim-scripts/matchit.zip'
runtime macros/matchit.vim
call plug#end()
" Ctrlp.vim
let g:ctrlp_map = '<c-p>'
let g:ctrlp_cmd = 'CtrlP'
let g:ctrlp_working_path_mode = 'ra'
let g:ctrlp_custom_ignore = {
\ 'dir': '\v[\/]\.(git|hg|svn)$',
\ 'file': '\v\.(exe|so|dll)$',
\ 'link': 'some_bad_symbolic_links',
\ }

That is all, no more no less.

To finish the installation, you need to do 2 things:

  1. Run curl -fLo ~/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
  2. Run vim +PlugInstall from your terminal

A few simple tips on how to learn vim properly:

  1. Finish vimtutor on your terminal 3 to 4 times. Read everything 3 to 4 times and actually practice it.
  2. Learn about vim movements, commands and modes
  3. Open vim at the root of the project and have just one instance open; don’t open more than one instance per project. This is very important, I can’t stress this enough. To open another file from your project, hit Ctrl+P.
  4. Start with a simple vimrc; the one I pasted above is a good start.
  5. Learn about buffers / windows and tabs in vim and how to navigate them.
  6. Add one plugin that you think might help every month, and put a few sticky notes with its shortcuts/mappings on your monitor.
  7. Use http://vimawesome.com/ to find useful plugins.

Most important of all: don’t use any plugins other than sensible and CtrlP for the first month.

Once you learn to speak the language of vim, using other editors will make you feel dumb.

A simpler way to generate an incrementing version for elixir apps

Mix has the notion of versions built into it. If you open up a mix file you’ll see a line like below:

# mix.exs
defmodule Webmonitor.Mixfile do
  use Mix.Project

  def project do
    [app: :webmonitor,
     version: "0.1.0",
     # ...

If you are using Git, there is a simple way to automatically generate a meaningful semantic version. All you need to do is:

  1. Tag a commit with a version tag, like below:
git tag --annotate v1.0 --message 'First production version, Yay!'
  2. Add a helper function which uses this info with git describe to generate a version:
defp app_version do
  # get the git version; returns something like: v1.0-231-g1c7ef8b
  {description, 0} = System.cmd("git", ~w[describe])

  description
  |> String.strip
  |> String.split("-")
  |> Enum.take(2)
  |> Enum.join(".")
  |> String.replace_leading("v", "")
end
  3. Use the return value of this function as the version:
# mix.exs
defmodule Webmonitor.Mixfile do
  use Mix.Project

  def project do
    [app: :webmonitor,
     version: app_version(),
     # ...

The way this works is simple. From the man pages of git-describe

NAME git-describe - Describe a commit using the most recent tag reachable from it

DESCRIPTION The command finds the most recent tag that is reachable from a commit. If the tag points to the commit, then only the tag is shown. Otherwise, it suffixes the tag name with the number of additional commits on top of the tagged object and the abbreviated object name of the most recent commit.

So, if you have a tag v1.0 like above, and you have 100 commits on top of it, git-describe will print v1.0-100-g1c7ef8b, where v1.0 is the latest git tag reachable from the current commit, 100 is the number of commits since that tag, and g1c7ef8b is the short hash of the current commit. We can easily transform this into 1.0.100 using the helper function above. Now you have a nice way of automatically managing versions: the patch version is bumped whenever a commit is made, and the major and minor versions can be changed by creating a new tag, e.g. v1.2.
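
To make the transformation concrete, here is the same string munging sketched in Ruby, purely for illustration:

```ruby
# Turn `git describe` output into a version string:
# "v1.0-100-g1c7ef8b" -> tag "v1.0", 100 commits since the tag, short hash "g1c7ef8b"
def version_from_describe(description)
  description.strip
             .split("-")
             .take(2)        # keep only the tag and the commit count
             .join(".")
             .sub(/\Av/, "") # drop the leading "v" from the tag
end

version_from_describe("v1.0-100-g1c7ef8b") # => "1.0.100"
version_from_describe("v1.0")              # => "1.0" (tag points directly at the commit)
```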

This is very useful when you are using distillery to build your releases.

Case insensitive key retrieval from maps in Elixir

I ran into an issue with inconsistent naming of keys in one of my providers’ JSON. This is really bad data quality; the keys in the data being sent should have consistent names: upper, lower, or capitalized, but consistent. Anyway, this provider was sending data with all kinds of mixed-case keys.

Here is some Elixir code I wrote to get keys using a case-insensitive match. There is an issue on the Poison decoder project which should render this obsolete; however, till that is fixed you can use the code below:

defmodule CaseInsensitiveGetIn do
  def ci_get_in(nil, _), do: nil
  def ci_get_in({_k, val}, []), do: val
  def ci_get_in({_k, val}, key), do: ci_get_in(val, key)

  def ci_get_in(map, [key | rest]) do
    current_level_map = Enum.find(map, &key_lookup(&1, key))
    ci_get_in(current_level_map, rest)
  end

  def key_lookup({k, _v}, key) when is_binary(k) do
    String.downcase(k) == String.downcase(key)
  end
end

ExUnit.start

defmodule CaseInsensitiveGetInTest do
  use ExUnit.Case
  import CaseInsensitiveGetIn

  test "gets an exact key" do
    assert ci_get_in(%{"name" => "Mujju"}, ~w(name)) == "Mujju"
  end

  test "gets capitalized key in map" do
    assert ci_get_in(%{"Name" => "Mujju"}, ~w(name)) == "Mujju"
  end

  test "gets capitalized input key in map" do
    assert ci_get_in(%{"Name" => "Mujju"}, ~w(Name)) == "Mujju"
  end

  test "gets mixed input key in map" do
    assert ci_get_in(%{"NaME" => "Mujju"}, ~w(nAme)) == "Mujju"
  end

  test "gets an exact deep key" do
    assert ci_get_in(%{"name" => "Mujju", "sister" => %{"name" => "Zainu"}}, ~w(sister name)) == "Zainu"
  end

  test "gets a mixed case deep map key" do
    assert ci_get_in(%{"name" => "Mujju", "sisTER" => %{"naME" => "Zainu"}}, ~w(sister name)) == "Zainu"
  end

  test "gets a mixed case deep key" do
    assert ci_get_in(%{"name" => "Mujju", "sisTER" => %{"naME" => "Zainu"}}, ~w(sIStER NAme)) == "Zainu"
  end

  test "gets a very deep key" do
    map = %{
      "aB" => %{
        "BC" => 7,
        "c" => %{"DD" => :foo, "Cassandra" => :awesome, "MOO" => %{"name" => "Mujju"}}
      }
    }

    assert ci_get_in(map, ~w(ab bc)) == 7
    assert ci_get_in(map, ~w(ab c dd)) == :foo
    assert ci_get_in(map, ~w(ab c moo name)) == "Mujju"
    assert ci_get_in(map, ~w(ab Bc)) == 7
    assert ci_get_in(map, ~w(ab C dD)) == :foo
    assert ci_get_in(map, ~w(ab C mOo nAMe)) == "Mujju"
  end
end

Script to analyze the structure of an xml document

While working with XML data, you often don’t have the schema (XSD/WSDL) files and may end up manually working through a document to understand its structure. On my current project I ran into a few hundred XML files and had to analyze them to understand the available data. Here is a script I created which prints all the possible node paths in the input files:

#!/usr/bin/env ruby
# Author: Khaja Minhajuddin <minhajuddin.k@gmail.com>

require 'nokogiri'

class XmlAnalyze
  def initialize(filepaths)
    @filepaths = filepaths
    @node_paths = {}
  end

  def analyze
    @filepaths.each { |filepath| analyze_file(filepath) }
    @node_paths.keys.sort
  end

  private

  def analyze_file(filepath)
    @doc = File.open(filepath) { |f| Nokogiri::XML(f) }
    analyze_node(@doc.children.first)
  end

  def analyze_node(node)
    return if node.is_a? Nokogiri::XML::Text
    add_path node.path
    node.attributes.keys.each do |attr|
      add_path("#{node.path}:#{attr}")
    end
    node.children.each do |child|
      analyze_node(child)
    end
  end

  def add_path(path)
    path = path.gsub(/\[\d+\]/, '')
    @node_paths[path] = true
  end
end

if ARGV.empty?
  puts 'Usage: ./analyze_xml.rb file1.xml file2.xml ....'
  exit(-1)
end

puts XmlAnalyze.new(ARGV).analyze

For example, for the xml below:

<?xml version="1.0" encoding="UTF-8"?>
<root>
  <person>
    <name type="full">Khaja</name>
    <age>31</age>
  </person>
  <person>
    <name type="full">Khaja</name>
    <dob>Jan</dob>
  </person>
</root>
it outputs:
/root
/root/person
/root/person/age
/root/person/dob
/root/person/name
/root/person/name:type

Hope you find it useful!

Bash completion script for mix

Bash completion is very handy for CLI tools. You can set it up very easily for mix using the following script:

#!/bin/bash
# `sudo vim /etc/bash_completion.d/mix.sh` and put this inside of it
# mix bash completion script
complete_mix_command() {
  [ -f mix.exs ] || exit 0
  mix help --search "$2" | cut -f1 -d'#' | cut -f2 -d' '
  return $?
}
complete -C complete_mix_command -o default mix

How to show your blog content in your Rails application

I recently released LiveForm, a service which gives you form endpoints (I’d love to have you check it out :)). I wanted to show my blog’s content on its home page, and it turned out to be pretty straightforward with the rich ruby ecosystem.

  1. First you need a way to get the data from your blog. The LiveForm blog has an atom feed at http://blog.liveformhq.com/atom.xml . I initially used RestClient to get the data from the feed.
  2. Once we have the feed, we need to parse it to extract the right content. Some quick googling led me to the awesome feedjira gem (I am not gonna comment on the awesome name :)).
  3. feedjira actually has a simple method to parse the feed from a URL Feedjira::Feed.fetch_and_parse(url)
  4. Once I got the entries, I just had to format them properly. However, there was an issue with summaries of blog posts having malformed html, due to hexo (the blog engine I use) naively slicing the post content at 200 characters. Nokogiri has a simple way of working around this. However, I went one step further and removed all html markup from the summary so that it doesn’t mess with the web application’s markup: Nokogiri::HTML(entry.summary).css("body").text
  5. Finally, I didn’t want to fetch and parse my feed for every user that visited my website. So, I used fragment caching to render the feed once every day.

Here is all the relevant code:

The class that fetches and parses the feed

class LiveformBlog
  URL = "http://blog.liveformhq.com/atom.xml"

  def entries
    Rails.logger.info "Fetching feed...................."
    feed = Feedjira::Feed.fetch_and_parse(URL)
    feed.entries.take(5).map { |x| parse_entry(x) }
  end

  private

  def parse_entry(entry)
    OpenStruct.new(
      title: entry.title,
      summary: fix_summary(entry),
      url: entry.url,
      published: entry.published,
    )
  end

  def fix_summary(entry)
    doc = Nokogiri::HTML(entry.summary)
    doc.css('body').text
  end
end

The view that caches and renders the feed

<%= cache Date.today.to_s do %>
  <div class='blog-posts'>
    <h2 class='section-heading'>From our Blog</h2>
    <% LiveformBlog.new.entries.each do |entry| %>
      <div class='blog-post'>
        <h4><a href='<%= entry.url %>'><%= entry.title %></a></h4>
        <p class='blog-post__published'><%= short_time entry.published %></p>
        <div><%= entry.summary %>...</div>
      </div>
    <% end %>
  </div>
<% end %>

Screenshot of the current page

Liveform blog

How to deploy a simple phoenix app on a single server using distillery

If you find issues or can improve this guide, please create a pull request at:

2. Setup the server

We’ll be running our server under the user called slugex. So, we first need to create that user.

## commands to be executed on our server
APP_USER=slugex
# create the parent dir for our home
sudo mkdir -p /opt/www
# create the user
sudo useradd --home "/opt/www/$APP_USER" --create-home --shell /bin/bash $APP_USER
# create the postgresql role for our user
sudo -u postgres createuser --echo --no-createrole --no-superuser --createdb $APP_USER

3. Install the git-deploy rubygem on our local computer

We’ll be using the git-deploy rubygem to do deploys. This allows deploys similar to Heroku. You just need to push to your production git repository to start a deployment.

## commands to be executed on our local computer
# install the gem
# you need ruby installed on your computer for this
gem install git-deploy

4. Setup distillery in our phoenix app (on local computer)

We’ll be using distillery to manage our releases.

Add the distillery dependency to our mix.exs

defp deps do
  [{:distillery, "~> 0.10"}]
end

Init the distillery config

# get dependencies
mix deps.get
# init distillery
mix release.init

Change rel/config.exs to look like below:

...
environment :prod do
  set include_erts: false
  set include_src: false
  # cookie info ...
end
...

5. Setup git deploy (local computer)

Let us set up the remote and the deploy hooks:

## commands to be executed on our local computer
# setup the git remote pointing to our prod server
git remote add prod slugex@slugex.com:/opt/www/slugex
# init
git deploy setup -r "prod"
# create the deploy files
git deploy init
# push to production
git push prod master

TODO: release this as a book

6. Setup postgresql access

## commands to be executed on the server as the slugex user
# create the database
createdb slugex_prod
# set the password for the slugex user
psql slugex_prod
> slugex_prod=> \password slugex
> Enter new password: enter the password
> Enter it again: repeat the password

7. Setup the prod.secret.exs

Copy the config/prod.secret.exs file from your local computer to /opt/www/slugex/config/prod.secret.exs

## on local computer from our phoenix app directory
scp config/prod.secret.exs slugex@slugex.com:config/

Create a new secret on your local computer using mix phoenix.gen.secret and paste it as the secret_key_base value in the server’s config/prod.secret.exs.

It should look something like below:

# on the server
# /opt/www/slugex/config/prod.secret.exs
use Mix.Config

config :simple, Simple.Endpoint,
  secret_key_base: "RgeM4Dt8kl3yyf47K1DXWr8mgANzOL9TNOOiCknZM8LLDeSdS1ia5Vc2HkmKhy68",
  http: [port: 4010],
  server: true, # <=== this is very important
  root: "/opt/www/slugex",
  url: [host: "slugex.com", port: 443],
  cache_static_manifest: "priv/static/manifest.json"

# Do not print debug messages in production
config :logger, level: :info

# Configure your database
config :simple, Simple.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "slugex",
  password: "another brick in the wall",
  database: "slugex_prod",
  pool_size: 20

8. Tweak the deploy scripts

9. One time setup on the server

## commands to be executed on server as slugex
MIX_ENV=prod mix do compile, ecto.create
MIX_ENV=prod ./deploy/after_push

Logger

Exception notifications

Setup systemd

10. One time setup on server (on server as slugex user)

## commands to be run on the server as the slugex user
cd /opt/www/slugex
# create the secrets config
echo 'use Mix.Config' > config/prod.secret.exs
# add your configuration to this file
# update hex
export MIX_ENV=prod
mix local.hex --force
mix deps.get
mix ecto.create

11. Nginx configuration

12. Letsencrypt setup and configuration

13. TODO: Configuration using conform

14. TODO: database backups to S3

15. TODO: uptime monitoring of websites using uptime monitor

16. TODO: email via SES

17. TODO: db seeds

18. TODO: nginx caching basics, static assets large expirations

19. TODO: remote console for debugging

sudo letsencrypt certonly --webroot -w /opt/www/webmonitor/public/ -d webmonitorhq.com --webroot -w /opt/www/webmonitor/public/ -d www.webmonitorhq.com

20. Check SSL certificate: https://www.sslshopper.com/ssl-checker.html

Common mistakes/errors

  1. SSH errors

Improvements

  1. Automate all of these using a hex package?
  2. Remove dependencies on git-deploy if possible
  3. Hot upgrades

How to extract bits from a binary in elixir

Erlang, and by extension Elixir, has powerful pattern matching constructs which allow you to easily extract bits from a binary. Here is an example which takes a binary and returns its bits:

defmodule Bits do
  # this is the public api which allows you to pass any binary representation
  def extract(str) when is_binary(str) do
    extract(str, [])
  end

  # this function does the heavy lifting by matching the input binary to
  # a single bit and sending the rest of the bits recursively back to itself
  defp extract(<<b :: size(1), bits :: bitstring>>, acc) when is_bitstring(bits) do
    extract(bits, [b | acc])
  end

  # this is the terminal condition when we don't have anything more to extract
  defp extract(<<>>, acc), do: acc |> Enum.reverse
end

IO.inspect Bits.extract("!!")    # => [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1]
IO.inspect Bits.extract(<<99>>)  # => [0, 1, 1, 0, 0, 0, 1, 1]

The code is pretty self-explanatory.

Elixir process timeout pitfall

If you have taken a look at Elixir, you may have come across something like the code below:

defmodule HardWorker do
  def work(id) do
    Process.sleep(id * 900)
    {:done, id}
  end
end

defmodule Runner do
  @total_timeout 1000

  def run do
    {us, _} = :timer.tc(&work/0)
    IO.puts "ELAPSED_TIME: #{us / 1000}"
  end

  def work do
    tasks = Enum.map 1..10, fn id ->
      Task.async(HardWorker, :work, [id])
    end

    Enum.map(tasks, fn task ->
      Task.await(task, @total_timeout)
    end)
  end
end

Runner.run

Looks simple enough: we loop over and create 10 processes and then wait for them to finish. It also prints a message ELAPSED_TIME: _ at the end, where _ is the time taken to run all the processes.

Can you take a guess how long this runner will take in the worst case?

If you guessed 10 seconds, you are right! I didn’t guess 10 seconds when I first saw this kind of code; I expected it to exit after 1 second. However, the key here is that Task.await is called on 10 tasks in sequence, so if the 10 tasks finish at the end of 1s, 2s, … 10s, this code will run just fine.

This is a completely made-up example, but it should show you that running tasks in parallel with timeouts is not just a Task.await away.

I have coded an example app with proper timeout handling and parallel processing at https://github.com/minhajuddin/parallel_elixir_workers. Check it out.

Addendum

I posted this on the elixirforum and got some feedback about it.

tasks = Enum.map 1..10, fn id ->
  Task.async(HardWorker, :work, [id])
end

# at this point all tasks are running in parallel
Enum.map(tasks, fn task ->
  Task.await(task, @total_timeout)
end)

Let us take another look at the relevant code. Now, let us say that this is spawning processes P1 to P10 in that order. Let’s say tasks T1 to T10 are created for these processes. Now all these tasks are running concurrently.

Now, in the second Enum.map, the first iteration waits on T1, so T1 has to finish within 1 second, otherwise this code will time out. However, while T1 is running, T2..T10 are also running. So when this code runs for T2 and waits for 1 second, T2 will have been running for 2 seconds. Effectively, T1 is given a time of 1 second, T2 a time of 2 seconds, T3 a time of 3 seconds, and so on.
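
The arithmetic behind this can be sketched in a few lines (Ruby here, purely for illustration):

```ruby
# Each Task.await gets a fresh 1-second timeout, but it only starts counting
# after the previous awaits have returned, so task i effectively gets i seconds.
timeout = 1
effective_deadlines = (1..10).map { |i| i * timeout }
# => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
worst_case = effective_deadlines.last # 10 seconds in the worst case, not 1
```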

This may be what you want. However, if you want all the tasks to finish executing within 1 second, you shouldn’t use Task.await. You can use Task.yield_many, which takes a list of tasks and allows you to specify a timeout after which it returns the results of whichever tasks finished. The documentation for Task.yield_many has a very good example of how to use it.

@benwilson512 has a good example on this

..suppose you wrote the following code:

task = Task.async(fn -> Process.sleep(:infinity) end)
Process.sleep(5_000)
Task.await(task, 5_000)

How long before it times out? 10 seconds of course. But this is obvious and expected. This is exactly what you’re doing by making the Task.await calls consecutive. It’s just that instead of sleeping in the main process you’re waiting on a different task. Task.await is blocking, this is expected.

How to control pianobar using global hotkeys using Tmux

I love pianobar. However, until yesterday I hated pausing and moving to the next song in pianobar. I had a small terminal dedicated to pianobar, and every time I had to change the song or pause, I had to select that window and then hit the right shortcut. I love hotkeys, they allow you to control your stuff without switching windows. I also happen to use tmux a lot. And it hit me yesterday: I could easily bind hotkeys to send the right key sequences to pianobar running in a tmux session. Here is how I did it.

I use xmonad, so I wired up Windows + Shift + p to tmux send-keys -t scratch:1.0 p &> /tmp/null.log. Now, whenever I hit the hotkey, it types the letter ‘p’ in the tmux session scratch, window 1, pane 0, where I have pianobar running.

You should be able to put these in a wrapper script and wire them up with any window manager or with unity.

-- relevant configuration
, ((modMask .|. shiftMask, xK_p ), spawn "tmux send-keys -t scratch:1.0 p &> /tmp/null.log") -- %! Pause pianobar
, ((modMask .|. shiftMask, xK_v ), spawn "tmux send-keys -t scratch:1.0 n &> /tmp/null.log") -- %! next pianobar
, ((modMask, xK_c ), spawn "mpc toggle") -- %! Pause mpd
, ((modMask, xK_z ), spawn "mpc prev") -- %! previous in mpd
, ((modMask, xK_v ), spawn "mpc next") -- %! next in mpd

How to use pianobar with a socks5 proxy to play pandora

I love pandora. However, I live in India, where pandora doesn’t stream. I got around this by proxying over SOCKS5. Here is how you can do it.

  1. First you need access to a SOCKS5 proxy. If you have an ssh server running in the US or any country where pandora streams, you can spin up a proxy connection by running the following command: ssh -D 1337 -f -C -q -N username@yourserver.com
  2. Once you have this running, you’ll need to change your pianobar config to make it use this proxy:
    # ~/.config/pianobar/config
    password = yoursecretpasswordinplaintext
    user = youremail
    proxy = socks5://localhost:1337/

Once you have this setup, you can just run the pianobar command and it will start playing your favorite music.

A simple ticker to receive tick events for every interval in Elixir

Go has very utilitarian ticker methods, for instance check: https://gobyexample.com/tickers

package main

import "time"
import "fmt"

func main() {
    // Tickers use a similar mechanism to timers: a
    // channel that is sent values. Here we'll use the
    // `range` builtin on the channel to iterate over
    // the values as they arrive every 500ms.
    ticker := time.NewTicker(time.Millisecond * 500)
    go func() {
        for t := range ticker.C {
            fmt.Println("Tick at", t)
        }
    }()

    // Tickers can be stopped like timers. Once a ticker
    // is stopped it won't receive any more values on its
    // channel. We'll stop ours after 1600ms.
    time.Sleep(time.Millisecond * 1600)
    ticker.Stop()
    fmt.Println("Ticker stopped")
}

These are very nice for running code at every interval. If you want something like this in Elixir, it can be implemented in a few lines of code.

This code allows you to create a Ticker process by calling Ticker.start with a recipient_pid (the process which receives the :tick events), a tick_interval (how often a :tick event should be sent to the recipient_pid), and a duration, which defaults to :infinity, meaning it will just keep ticking till the end of time. Once you set this up, the recipient keeps getting :tick events every tick_interval. Go ahead and read the code; it is pretty self-explanatory.

There is also erlang’s :timer.apply_interval(time, module, function, arguments), which runs some code every interval. Unlike it, however, the code below doesn’t create overlapping executions.

I have also created a gist in the interest of collaboration here: https://gist.github.com/minhajuddin/064226d73d5648aa73364218e862a497

defmodule Ticker do
  require Logger

  # public api
  def start(recipient_pid, tick_interval, duration \\ :infinity) do
    # Process.monitor(pid) # what to do if the process is dead before this?
    # start a process whose only responsibility is to wait for the interval
    ticker_pid = spawn(__MODULE__, :loop, [recipient_pid, tick_interval, 0])
    # and send a tick to the recipient pid and loop back
    send(ticker_pid, :send_tick)
    schedule_terminate(ticker_pid, duration)
    # returns the pid of the ticker, which can be used to stop the ticker
    ticker_pid
  end

  def stop(ticker_pid) do
    send(ticker_pid, :terminate)
  end

  # internal api
  def loop(recipient_pid, tick_interval, current_index) do
    receive do
      :send_tick ->
        send(recipient_pid, {:tick, current_index}) # send the tick event
        Process.send_after(self(), :send_tick, tick_interval) # schedule a self event after interval
        loop(recipient_pid, tick_interval, current_index + 1)
      :terminate ->
        # NOTE: we could also optionally wire it up to send a last_tick event when it terminates
        send(recipient_pid, {:last_tick, current_index})
        :ok # terminating
      oops ->
        Logger.error("received unexpected message #{inspect oops}")
        loop(recipient_pid, tick_interval, current_index + 1)
    end
  end

  defp schedule_terminate(_pid, :infinity), do: :ok
  defp schedule_terminate(ticker_pid, duration), do: Process.send_after(ticker_pid, :terminate, duration)
end

defmodule Listener do
  def start do
    Ticker.start(self(), 500, 2000) # will send approximately 4 messages
  end

  def run do
    receive do
      {:tick, _index} = message ->
        IO.inspect(message)
        run()
      {:last_tick, _index} = message ->
        IO.inspect(message)
        :ok
    end
  end
end

Listener.start
Listener.run
# will output
# => {:tick, 0}
# => {:tick, 1}
# => {:tick, 2}
# => {:tick, 3}
# => {:last_tick, 4}

Let’s Encrypt auto renewal for ubuntu and nginx

Create a file called /etc/nginx/le_redirect_include.conf

# intercept the challenges
location '/.well-known/acme-challenge' {
  default_type "text/plain";
  root /usr/share/nginx/letsencrypt;
}

# redirect all traffic to the https version
location / {
  return 301 https://$host$request_uri;
}

In your redirect block include this file

server {
  server_name www.liveformhq.com liveformhq.com;
  include /etc/nginx/le_redirect_include.conf;
}

To generate the LE keys run the following

sudo mkdir -p /usr/share/nginx/letsencrypt
# generate the certificate
sudo letsencrypt certonly --webroot=/usr/share/nginx/letsencrypt --domain cosmicvent.com --domain www.cosmicvent.com
# reload nginx
sudo kill -s HUP $(cat /var/run/nginx.pid)

Put the following in your crontab

$ sudo crontab -e
@weekly /usr/bin/letsencrypt renew &> /tmp/letsencrypt.log; kill -s HUP $(cat /var/run/nginx.pid)

Algorithm to compute downtime of a service/server

I am working on an open source side project called webmonitorhq.com. It notifies you when your service goes down, and it stores the events when a service goes down and comes back up. I wanted to show the uptime of a service for a duration of 24 hours, 7 days, etc.

This is the algorithm I came up with. Please point out any improvements that can be made to it; I’d love to hear them.

The prerequisite for this algorithm is that you have data for the UP events and the DOWN events.

I have a table called events with an event string and an event_at datetime:

events
  - id
  - event (UP or DOWN)
  - event_at (datetime of event occurrence)

Algorithm to calculate downtime

  1. Decide the duration (24 hours, 7 days, 30 days)
  2. Select all the events in that duration
  3. Add an UP event at the end of the duration
  4. Add an inverse of the first event at the beginning of this duration, e.g. if the first event is an UP, add a DOWN, and vice versa
  5. Start from the first UP event after a DOWN event and subtract the DOWN event_at from the UP event_at, do this till you reach the end. This gives you the downtime
  6. Subtract the downtime from the duration to get the uptime

e.g.

  1. 24 hour duration. Current Time is 00hours
  2. UPat1 DOWNat5 UPat10
  3. UPat1 DOWNat5 UPat10 UPat24
  4. DOWNat0 UPat1 DOWNat5 UPat10 UPat24
  5. (UPat1 - DOWNat0) + (UPat10 - DOWNat5) gives downtime = 1 + 5 = 6
  6. uptime = 24 - 6 => 18
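
The steps above can be sketched in a few lines of Ruby; this is an illustrative sketch only, with events as hypothetical [kind, hour] pairs matching the worked example:

```ruby
duration = 24
events = [["UP", 1], ["DOWN", 5], ["UP", 10]]  # events inside the window

events << ["UP", duration]                     # step 3: append an UP at the end
first_inverse = events.first[0] == "UP" ? "DOWN" : "UP"
events.unshift([first_inverse, 0])             # step 4: prepend the inverse of the first event

# step 5: sum (UP.at - DOWN.at) over every DOWN..UP pair
downtime = 0
last_down = nil
events.each do |kind, at|
  if kind == "DOWN"
    last_down ||= at
  elsif last_down
    downtime += at - last_down
    last_down = nil
  end
end

uptime = duration - downtime  # step 6
# downtime => 6, uptime => 18
```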

Elixir IO.inspect to debug pipelines

While writing long pipelines, you may want to debug the intermediate values. Just insert |> IO.inspect between the pipeline steps.

e.g. in the expression below:

:crypto.strong_rand_bytes(32)
|> :base64.encode_to_string
|> :base64.decode
|> :base64.encode

If we want to check the intermediate values, we just need to add |> IO.inspect at each step:

:crypto.strong_rand_bytes(32)
|> IO.inspect
|> :base64.encode_to_string
|> IO.inspect
|> :base64.decode
|> IO.inspect
|> :base64.encode
|> IO.inspect

This will print all the intermediate values to STDOUT. IO.inspect is a function which prints its input and returns it.
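
For comparison, Ruby offers the same debugging trick through Object#tap, which yields the value to a block and returns it unchanged:

```ruby
# tap lets you peek at an intermediate value without breaking the chain
result = [1, 2, 3]
         .map { |x| x * 2 }
         .tap { |xs| p xs } # prints [2, 4, 6] and returns it
         .sum
# result => 12
```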

How to store temporary data and share it with your background processor

In my current project, I had to store some temporary data for a user and let a few background processors have access to it. I wrote something small with a dependency on Redis which does the job.

It allows me to use current_user.tmp[:token_id] = "somedata here" and then access it in the background processor using user.tmp[:token_id] which I think is pretty neat.

Moreover, since my use case needed this only for temporary storage, I set it to auto-expire in 1 day. If yours is different, you could change that piece of code.
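
Why Marshal as the serializer? It round-trips arbitrary Ruby objects, not just strings, which is what lets tmp store rich values. A standalone sketch (no Redis needed):

```ruby
# Marshal.dump produces a byte string; Marshal.load restores the object graph,
# including nested hashes, arrays, and Time values.
data = { token: "abc", expires_at: Time.utc(2020, 1, 1), ids: [1, 2, 3] }
restored = Marshal.load(Marshal.dump(data))
restored == data # => true
```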

# /app/models/user_tmp.rb
class UserTmp
  EXPIRATION_SECONDS = 1.day
  SERIALIZER = Marshal

  attr_reader :user

  def initialize(user)
    @user = user
  end

  def [](key)
    serialized_val = Redis.current.get(namespaced_key(key))
    SERIALIZER.load(serialized_val) if serialized_val
  end

  def []=(key, val)
    serialized_val = SERIALIZER.dump(val)
    Redis.current.setex(namespaced_key(key), EXPIRATION_SECONDS, serialized_val)
  end

  private

  def namespaced_key(key)
    "u:#{user.id}:#{key}"
  end
end

And here is the user class

# /app/models/user.rb
class User < ActiveRecord::Base
  # ...
  def tmp
    @tmp ||= UserTmp.new(self)
  end
  # ...
end
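If you want to play with the [] / []= interface without a Redis dependency, here is a minimal in-memory sketch of the same pattern. FakeUserTmp and its Hash-backed store are illustrative stand-ins for the Redis-backed class above; expiry and serialization are omitted, only the per-user key namespacing is shown:

```ruby
# In-memory stand-in for UserTmp: same [] / []= interface,
# backed by a Hash instead of Redis, no expiry or serialization.
class FakeUserTmp
  STORE = {}

  def initialize(user_id)
    @user_id = user_id
  end

  def [](key)
    STORE[namespaced_key(key)]
  end

  def []=(key, val)
    STORE[namespaced_key(key)] = val
  end

  private

  # Prefix keys with the user id so users cannot see each other's data
  def namespaced_key(key)
    "u:#{@user_id}:#{key}"
  end
end

tmp = FakeUserTmp.new(42)
tmp[:token_id] = "somedata here"
tmp[:token_id] # => "somedata here"
```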

Hope you find it useful :)

Subdomains to restrict from your SaaS app

Many SaaS apps allow users to host their websites under the app's root domain, e.g. GitHub allows you to host your sites using GitHub Pages under the github.io domain.

Here is a list of subdomains which you should reserve while building your own SaaS product.

I usually put this data in a /data/reserved_subdomains file and then use it like below:

class Site < ActiveRecord::Base
  # ...
  # validations
  SUBDOMAIN_RX = /\A[a-z\d]+(-[a-z\d]+)*\Z/i
  validates :subdomain, presence: true,
                        uniqueness: true,
                        length: { in: 4..63, unless: Proc.new { user && user.admin? } },
                        format: { with: SUBDOMAIN_RX },
                        exclusion: { in: File.read(Rails.root.join("./data/reserved_subdomains")).each_line.map(&:strip),
                                     unless: Proc.new { user && user.admin? } }
  # ...
end
about
abuse
access
account
accounts
address
admanager
admin
admindashboard
administration
administrator
administrators
admins
adsense
adult
advertising
adwords
affiliate
affiliates
ajax
analytics
android
anon
anonymous
api1
api2
api3
apps
archive
assets
assets1
assets2
assets3
assets4
assets5
atom
auth
authentication
avatar
backup
banner
banners
beta
billing
billings
blog
blogs
board
bots
business
cache
cadastro
calendar
campaign
careers
chat
client
cliente
clients
cname
code
comercial
community
compare
compras
config
connect
contact
contest
copyright
cpanel
create
css1
css2
css3
dashboard
data
delete
demo
design
designer
devel
developer
developers
development
directory
docs
domain
donate
download
downloads
ecommerce
edit
editor
email
e-mail
example
favorite
feed
feedback
feeds
file
files
flog
follow
forum
forums
free
gadget
gadgets
games
gettingstarted
graph
graphs
group
groups
guest
help
home
homepage
host
hosting
hostmaster
hostname
html
http
httpd
https
image
images
imap
img1
img2
img3
inbox
index
indice
info
information
intranet
invite
invoice
invoices
ipad
iphone
jabber
jars
java
javascript
jobs
knowledgebase
launchpad
legal
list
lists
login
logout
logs
mail
mail1
mail2
mail3
mail4
mail5
mailer
mailing
main
manage
manager
marketing
master
media
message
messages
messenger
microblog
microblogs
mine
mobile
movie
movies
music
musicas
mysql
name
named
network
networks
news
newsite
newsletter
nick
nickname
notes
noticias
official
online
operator
order
orders
page
pager
pages
panel
partner
partnerpage
partners
password
payment
payments
perl
photo
photoalbum
photos
pics
picture
pictures
plugin
plugins
policy
pop3
popular
portal
post
postfix
postmaster
posts
press
privacy
private
profile
project
projects
promo
public
python
random
redirect
register
registration
resolver
root
ruby
sale
sales
sample
samples
sandbox
script
scripts
search
secure
security
send
server
servers
service
setting
settings
setup
shop
signin
signup
site
sitemap
sitenews
sites
smtp
soporte
sorry
staff
stage
staging
start
stat
static
statistics
stats
status
store
stores
subdomain
subscribe
suporte
support
survey
surveys
system
tablet
tablets
talk
task
tasks
teams
tech
telnet
test
test1
test2
test3
teste
tests
theme
themes
todo
tools
trac
translate
update
upload
uploads
usage
user
username
usernames
users
usuario
validation
validations
vendas
video
videos
visitor
webdisk
webmail
webmaster
website
websites
whois
wiki
workshop
www1
www2
www3
www4
www5
www6
www7
wwws
wwww
yourdomain
yourname
yoursite
yourusername

Script to clean up old directories on a Linux server

Here is a simple script which cleans up directories older than x days on your server. It is useful for freeing up space by removing temporary directories.

#!/bin/bash
# usage:
#   delete-old-dirs.sh /opt/builds 3   # deletes dirs inside /opt/builds which are older than 3 days
# cron entry to run this every hour:
#   0 * * * * /usr/bin/delete-old-dirs.sh /opt/builds 2 >> /tmp/delete.log 2>&1
# cron entry to run this every day:
#   0 0 * * * /usr/bin/delete-old-dirs.sh /opt/builds 2 >> /tmp/delete.log 2>&1

if ! [ $# -eq 2 ]
then
  cat <<EOS
Invalid arguments
Usage:
  delete-old-dirs.sh /root/directory/to-look/for-temp-dirs days-since-last-modification
  e.g. > delete-old-dirs.sh /opt/builds 3
EOS
  exit 1
fi

root=$1
ctime=$2

for dir in $(find "$root" -mindepth 1 -maxdepth 1 -type d -ctime +"$ctime")
do
  # stat --format: %n=filename, %A=access rights, %G(%g)=group name (id), %U(%u)=owner name (id), %y=last modification time
  echo "removing: $(stat --format="%n %A %G(%g) %U(%u) %y" "$dir")"
  rm -rf "$dir"
done

Put this in your code to debug anything

Aaron Patterson wrote a very nice article on how he does debugging.

Here is some more code to make your debugging easier.

class Object
  def dbg
    tap do |x|
      puts ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
      puts x
      puts "<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<"
    end
  end
end

# now you can turn the following:
get_csv.find { |x| x[id_column] == row_id }
# into =>
get_csv.dbg.find { |x| x[id_column.dbg] == row_id.dbg }.dbg

Update:

Josh Cheek has taken this to the 11th level here: https://gist.github.com/JoshCheek/55b53e2faa2776d6a054#file-ss-png Awesome stuff :)

How to open the most recent file created in Vim

When working with static site blogs, you end up creating files with very long names for your blog posts. For example, this very post has the filename source/_posts/how-to-open-the-most-recent-file-created-in-vim.md.

Now, finding this exact file among hundreds of others and opening it is a pain. Here is a small script which I wrote by piecing together stuff from the internet.

# takes 1 argument: the directory to search
function latest(){
  # find the most recently modified file in the directory, ignoring hidden files
  find "$1" -type f -printf "%T@|%p\n" | sort -n | grep -Ev '^\.|/\.' | tail -n 1 | cut -d '|' -f2
}

function openlatest(){
  ${EDITOR-vim} "$(latest "$1")"
}

Now, I can just run openlatest source to open up the file source/_posts/how-to-open-the-most-recent-file-created-in-vim.md in vim and start writing.

This technique can also be used to open the latest Rails migration. Hope this function finds a home in your ~/.bashrc :)