In my 10 years in the software industry, I have created a number of products and worked
on a lot of projects. Looking back at the products/projects that have been successful, one thing
stands out: there are 3 critical pieces to a software product / startup.
0. A value proposition
This one is a given, so I haven’t even counted it. Without a value proposition you don’t have anything. Your product
must provide value to your customers.
1. Domain knowledge
You need someone on your team with knowledge of the domain. Ideally you would have
come up with the product idea because of a good understanding of the pain points, and of the things
that can provide value. This too is fairly easy to understand.
2. Marketing Strength
You also need to have someone who can market your product. An awesome product without marketing is a dead product.
You need to either build your marketing expertise or get someone who is good at it.
Marketing is one of the things that is often overlooked. People think that if the product is good, people will buy it.
This is completely false. You need a lot of hustle to market your product.
3. Technical expertise
You obviously need someone who can build a usable product which provides value.
But this is the last on the list.
Many people come to me with ideas for startups. I always tell them about these 3 things.
The next time you want to build a startup, think about these skills. Without one of these you are dead in the water.
I had a good time presenting a talk about “Getting Started with Elm” at the awesome nghyderabad.
The audience was very interactive and the food was great. Shout out to Fission Labs for the awesome venue!
Here are a few useful links which should help you learn Elm
If you run into the following error while running your Ecto migrations:

    ReleaseTasks.migrate
    ** (Ecto.MigrationError) migrations can't be executed, migration name foo_bar is duplicated
        (ecto) lib/ecto/migrator.ex:259: Ecto.Migrator.ensure_no_duplication/1
        (ecto) lib/ecto/migrator.ex:235: Ecto.Migrator.migrate/4

you can fix it by running 1 migration at a time:

    mix ecto.migrate --step 1
This happens when you are trying to run two migrations with the same name (regardless of the timestamps).
By restricting it to run 1 migration at a time you won’t run into this issue.
Ideally you should not have 2 migrations with the same name :)
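For instance (these paths and timestamps are made up for illustration), the following two migration files would trigger the error even though their timestamps differ, because both share the name foo_bar:

```text
priv/repo/migrations/20170101120000_foo_bar.exs
priv/repo/migrations/20170315093000_foo_bar.exs
```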
jq is an awesome utility for parsing and transforming json via the command line. I wanted something similar for xml.
The following is a short ruby script which does a tiny tiny (did I say tiny?) bit of what jq does for xml. Hope you find it useful.
require 'nokogiri'

if ARGV.count < 2
  puts <<-EOS
Usage: xml_pluck xpath file1.xml file2.xml
e.g. xml_pluck "//children/name/text()" <(echo '<?xml version="1.0"?><children><name>Zainab</name><name>Mujju</name></children>')
# prints Zainab Mujju
  EOS
  exit(-1)
end

# evaluate the xpath against each input file and print the matching nodes
xpath = ARGV.shift
ARGV.each do |file|
  Nokogiri::XML(File.read(file)).xpath(xpath).each { |node| puts node }
end
In a previous blog post we saw how to do case insensitive retrieval from maps.
A better solution for this if there are many key lookups is to transform the input by lower casing all the keys just after decoding. The solution from the blog post would iterate over each {key, value} pair till it found the desired key.
However a proper map lookup doesn’t iterate over the keys but uses a hashing algorithm to get to the key’s location in constant time regardless of the size of the map.
Anyway, here is the solution: transform each key of the input JSON. Hope you find it useful :)
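A minimal sketch of that transformation (the module name is mine; it assumes all keys are strings, as they are in freshly decoded JSON):

```elixir
defmodule KeyNormalizer do
  # Recursively downcases every string key just after decoding, so that all
  # later lookups are ordinary constant-time Map.get calls.
  def downcase_keys(map) when is_map(map) do
    Map.new(map, fn {k, v} -> {String.downcase(k), downcase_keys(v)} end)
  end

  # lists may contain nested maps, so walk into them too
  def downcase_keys(list) when is_list(list), do: Enum.map(list, &downcase_keys/1)
  def downcase_keys(other), do: other
end

KeyNormalizer.downcase_keys(%{"FirstName" => "Zainab", "Address" => %{"City" => "Hyderabad"}})
# => %{"firstname" => "Zainab", "address" => %{"city" => "Hyderabad"}}
```

You pay the O(n) walk once, right after Poison.decode, instead of on every lookup.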
# ~/.inputrc

# vim key bindings
set editing-mode vi
set keymap vi

# do not bell on tab-completion
set bell-style none

set expand-tilde off
set input-meta off
set convert-meta on
set output-meta off
set horizontal-scroll-mode off
set history-preserve-point on
set mark-directories on
set mark-symlinked-directories on
set match-hidden-files off

# completion settings
set page-completions off
set completion-query-items 2000
set completion-ignore-case off
set show-all-if-ambiguous on
set show-all-if-unmodified on
set completion-prefix-display-length 10
set print-completions-horizontally off
Once you set this up, many repls will respect these bindings; irb and pry, for instance. As a matter of fact, any good terminal app which uses the readline library will respect them.
Tmux can also be set up with vim bindings.
So, whenever I work with someone, people always seem to be impressed that vim can do so much so simply.
This is really the power of vim: it was built for text editing, and it is the best tool for that job. However, learning it can be quite painful, and many people abandon it within a few days.
There is a very popular learning curve graph about vim
The part about vim is partially true, in that once it clicks, everything falls into place.
Notepad is an editor which is very easy to use, but if you compare editors to programming languages, it has the capability of a calculator. You put your cursor in a place, type stuff, and that is all.
Vim, on the other hand, lets you speak to it in an intelligent way. Anyway, I am rambling at this point.
The reason I am writing this blog post in the middle of the night is that many people ask me, “How should I set up vim? I’d love to have it look/work like yours.”
And many times I point them to my vimrc.
However, if you are planning on learning vim, don’t go there. Start with the following ~/.vimrc
" Ctrlp.vim let g:ctrlp_map = '<c-p>' let g:ctrlp_cmd = 'CtrlP' let g:ctrlp_working_path_mode = 'ra' let g:ctrlp_custom_ignore = { \ 'dir': '\v[\/]\.(git|hg|svn)$', \ 'file': '\v\.(exe|so|dll)$', \ 'link': 'some_bad_symbolic_links', \ }
That is all, no more no less.
To finish the installation, you need to do 2 things:
Run curl -fLo ~/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
Run vim +PlugInstall from your terminal
A few simple tips on how to learn vim properly:
Finish vimtutor on your terminal 3 to 4 times. Read everything 3 to 4 times and actually practice it.
Learn about vim movements, commands and modes
Open vim at the root of the project and have just one instance open; don’t open more than one instance per project. This is very important, I can’t stress it enough. To open another file from your project, hit Ctrl+P.
Start with a simple vimrc; the one I pasted above is a good start.
Learn about buffers / windows and tabs in vim and how to navigate them.
Add 1 extension that you think might help every month. And put a few sticky notes with its shortcuts/mappings on your monitor.
NAME
git-describe - Describe a commit using the most recent tag reachable from it
DESCRIPTION
The command finds the most recent tag that is reachable from a commit. If the tag points to the commit, then only the tag is shown. Otherwise, it suffixes the tag name with the
number of additional commits on top of the tagged object and the abbreviated object name of the most recent commit.
So, if you have a tag v1.0 like above, and you have 100 commits on top of it, git-describe will print v1.0-100-g1c7ef8b, where v1.0 is the latest git tag reachable from the
current commit, 100 is the number of commits since then, and g1c7ef8b is the short commit hash of the current commit. We can easily transform this to 1.0.100 using the above helper function.
Now, you have a nice way of automatically managing versions. The patch version is bumped whenever a commit is made, the major and minor version can be changed by creating a new tag, e.g. v1.2
This is very useful when you are using distillery for building your releases
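The helper function referred to above isn’t shown in this excerpt; here is a minimal sketch of one (the module and function names are mine, assuming the usual `vMAJOR.MINOR-COUNT-gHASH` describe format):

```elixir
defmodule VersionHelper do
  # Turns `git describe` output like "v1.0-100-g1c7ef8b" into "1.0.100".
  # When the current commit is the tag itself, `git describe` prints just
  # "v1.0", so we fall back to stripping the leading "v".
  def from_describe(described) do
    case Regex.run(~r/^v(\d+\.\d+)-(\d+)-g[0-9a-f]+$/, described) do
      [_, base, count] -> "#{base}.#{count}"
      nil -> String.trim_leading(described, "v")
    end
  end
end

VersionHelper.from_describe("v1.0-100-g1c7ef8b") # => "1.0.100"
```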
I ran into an issue with inconsistent naming of keys in one of my provider’s JSON.
This is really bad data quality; the keys in the data being sent should have consistent names:
either upper, lower, or capitalized, but consistent. Anyway, this provider was sending data with all kinds of mixed-case keys.
Here is some elixir code that I wrote to get keys using a case insensitive match.
There is an issue on the Poison decoder project which should render this useless, however till that is fixed you can use the code below:
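As a rough sketch of the idea (the module and function names are mine; note that this scans the whole map, so each lookup is O(n), which is what the follow-up post about transforming keys improves on):

```elixir
defmodule InsensitiveMap do
  # Returns the value whose key matches `key` ignoring case, or `default`
  # when no key matches. Assumes string keys and truthy values (a stored
  # nil/false value would be skipped by find_value in this sketch).
  def get(map, key, default \\ nil) do
    target = String.downcase(key)

    Enum.find_value(map, default, fn {k, v} ->
      if String.downcase(k) == target, do: v
    end)
  end
end

InsensitiveMap.get(%{"FirstName" => "Zainab"}, "firstname") # => "Zainab"
```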
While working with XML data, you often don’t find the WSDL files and may end up
manually working through the document to understand its structure. At my current project
I ran into a few hundred XML files and had to analyze them to understand the data available.
Here is a script I created which prints all the possible nodes in the input files
I recently released LiveForm, a service which gives you form endpoints (I’d love to have you check it out :) ).
I wanted to show my blog’s content on the home page, which is pretty straightforward with the rich ruby ecosystem.
First you need a way to get the data from your blog. The LiveForm blog has an atom feed at http://blog.liveformhq.com/atom.xml . I initially used RestClient to get the data from the feed.
Once we have the feed, we need to parse it to extract the right content. Some quick googling led me to the awesome feedjira gem, (I am not gonna comment about the awesome name:))
feedjira actually has a simple method to parse the feed from a URL Feedjira::Feed.fetch_and_parse(url)
Once I got the entries, I just had to format them properly. However, there was an issue with summaries of blog posts having malformed html. This was due to naively slicing the blog post content at 200 characters by hexo (the blog engine I use), Nokogiri has a simple way of working around this. However, I went one step further and removed all html markup from the summary so that it doesn’t mess with the web application’s markup: Nokogiri::HTML(entry.summary).css("body").text
Finally, I didn’t want to fetch and parse my feed for every user that visited my website. So, I used fragment caching to render the feed once every day.
If you find issues or can improve this guide, please create a pull request at:
2. Setup the server
We’ll be running our server under the user called slugex. So, we first need
to create that user.
## commands to be executed on our server
APP_USER=slugex

# create parent dir for our home
sudo mkdir -p /opt/www

# create the user
sudo useradd --home "/opt/www/$APP_USER" --create-home --shell /bin/bash $APP_USER

# create the postgresql role for our user
sudo -u postgres createuser --echo --no-createrole --no-superuser --createdb $APP_USER
3. Install the git-deploy rubygem on our local computer
We’ll be using the git-deploy rubygem to
do deploys. This allows deploys similar to Heroku. You just need to push to your
production git repository to start a deployment.
## commands to be executed on our local computer
# install the gem
# you need ruby installed on your computer for this
gem install git-deploy
4. Setup distillery in our phoenix app (on local computer)
# get dependencies
mix deps.get
# init distillery
mix release.init
Change rel/config.ex to look like below
...
environment :prod do
  set include_erts: false
  set include_src: false
  # cookie info
  ...
end
...
5. Setup git deploy (local computer)
Let us setup the remote and the deploy hooks
## commands to be executed on our local computer

# setup the git remote pointing to our prod server
git remote add prod slugex@slugex.com:/opt/www/slugex

# init git deploy
git deploy setup -r "prod"
# create the deploy files
git deploy init
# push to production
git push prod master
TODO: release this as a book
6. Setup postgresql access
## commands to be executed on the server as the slugex user

# create the database
createdb slugex_prod

# set the password for the slugex user
psql slugex_prod
slugex_prod=> \password slugex
Enter new password: enter the password
Enter it again: repeat the password
7. Setup the prod.secret.exs
Copy the config/prod.secret.exs file from your local computer to /opt/www/slugex/config/prod.secret.exs
## on local computer from our phoenix app directory
scp config/prod.secret.exs slugex@slugex.com:config/
create a new secret on your local computer using mix phoenix.gen.secret and
paste it in the server’s config/prod.secret.exs secret
# on the server
# /opt/www/slugex/config/prod.secret.exs
use Mix.Config

config :simple, Simple.Endpoint,
  secret_key_base: "RgeM4Dt8kl3yyf47K1DXWr8mgANzOL9TNOOiCknZM8LLDeSdS1ia5Vc2HkmKhy68",
  http: [port: 4010],
  server: true, # <=== this is very important
  root: "/opt/www/slugex",
  url: [host: "slugex.com", port: 443],
  cache_static_manifest: "priv/static/manifest.json"

# Do not print debug messages in production
config :logger, level: :info

# Configure your database
config :simple, Simple.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "slugex",
  password: "another brick in the wall",
  database: "slugex_prod",
  pool_size: 20
8. Tweak the deploy scripts
9. One time setup on the server
## commands to be executed on server as slugex
MIX_ENV=prod mix do compile, ecto.create
MIX_ENV=prod ./deploy/after_push
Logger
Exception notifications
Setup systemd
10. One time setup on the server (as the slugex user)
## commands to be run on the server as the slugex user
cd /opt/www/slugex

# create the secrets config
echo 'use Mix.Config' > config/prod.secret.exs
# add your configuration to this file
Erlang, and by extension Elixir, has powerful pattern matching constructs which
allow you to easily extract bits from a binary. Here is an example which takes
a binary and returns its bits.
defmodule Bits do
  # this is the public api which allows you to pass any binary representation
  def extract(str) when is_binary(str) do
    extract(str, [])
  end

  # this function does the heavy lifting by matching the input binary to
  # a single bit and sends the rest of the bits recursively back to itself
  defp extract(<<b :: size(1), bits :: bitstring>>, acc) when is_bitstring(bits) do
    extract(bits, [b | acc])
  end

  # this is the terminal condition when we don't have anything more to extract
  defp extract(<<>>, acc), do: acc |> Enum.reverse
end
defmodule HardWorker do
  def work(id) do
    Process.sleep(id * 900)
    {:done, id}
  end
end

defmodule Runner do
  @total_timeout 1000

  def run do
    {us, _} = :timer.tc(&work/0)
    IO.puts "ELAPSED_TIME: #{us/1000}"
  end

  def work do
    tasks = Enum.map 1..10, fn id ->
      Task.async(HardWorker, :work, [id])
    end

    Enum.map(tasks, fn task -> Task.await(task, @total_timeout) end)
  end
end

Runner.run
Looks simple enough, we loop over and create 10 processes and then wait
for them to finish. It also prints out a message ELAPSED_TIME: _ at the end where
_ is the time taken for it to run all the processes.
Can you take a guess how long this runner will take in the worst case?
If you guessed 10 seconds, you are right! I didn’t guess 10 seconds when I first
saw this kind of code; I expected it to exit after 1 second. However, the key
here is that Task.await is called on the 10 tasks one after another, and since the tasks finish
at the end of 1s, 2s, … 10s respectively, each await returns just within its own timeout, so this code runs just fine.
This is a completely made up example but it should show you that running in parallel
with timeouts is not just a Task.await away.
Let us take another look at the relevant code. Say it spawns
processes P1 to P10, in that order, and tasks T1 to T10 are created for them.
All these tasks run concurrently.
Now, in the second Enum.map, the first iteration waits on T1,
so T1 has to finish within 1 second or this code times out. However,
while we wait on T1, T2..T10 keep running. So, by the time we start waiting on T2,
it has already been running for about a second, and it gets up to another second before its
await times out. Effectively, T1 is given 1 second, T2 2 seconds, T3 3 seconds, and so on.
This may be what you want. However, if you want all the tasks to finish within 1 second,
you shouldn’t use Task.await. Use Task.yield_many instead: it takes a list of tasks
and a timeout, and returns with the results of whichever tasks
finished within that timeout. The documentation for Task.yield_many has a very good
example of how to use it.
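A minimal sketch along the lines of that example, using the same made-up sleeping workers as above (the shutdown handling at the end is the part Task.await alone doesn’t give you):

```elixir
# Spawn the same 10 workers, but give them one shared 1-second deadline.
tasks =
  Enum.map(1..10, fn id ->
    Task.async(fn ->
      Process.sleep(id * 900)
      {:done, id}
    end)
  end)

results =
  tasks
  |> Task.yield_many(1000)
  |> Enum.map(fn {task, result} ->
    # result is {:ok, value} if the task finished, {:exit, reason} if it
    # crashed, or nil if it is still running; kill the stragglers.
    result || Task.shutdown(task, :brutal_kill)
  end)

# Only the workers that slept for under a second produce {:ok, {:done, id}};
# the rest come back as nil and are shut down.
```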
How long before it times out? 10 seconds of course. But this is obvious and expected.
This is exactly what you’re doing by making the Task.await calls consecutive.
It’s just that instead of sleeping in the main process you’re waiting on a different task.
Task.await is blocking, this is expected.
I love pianobar. However, until yesterday I hated pausing and moving to the next video
using pianobar. I had a small terminal dedicated for pianobar and every time I had to
change the song or pause, I used to select the window and then hit the right shortcut.
I love hotkeys; they allow you to control your stuff without switching windows. I also happen
to use tmux a lot. And it hit me yesterday: I could easily bind hotkeys that send the
right key sequences to pianobar running in a tmux session. Here is how I did it.
I use xmonad, so I wired up Windows + Shift + p to tmux send-keys -t scratch:1.0 p &> /tmp/null.log
So, now whenever I hit the right hotkey it types the letter ‘p’ in the tmux session scratch window 1 and pane 0, where I have pianobar running.
I use xmonad, but you should be able to put these in a wrapper script and wire them up with any window manager or with unity.
I love pandora. However, I live in India, where pandora doesn’t stream.
I got around this by proxying over socks5. Here is how you can do it.
First you need access to a socks5 proxy. If you have an ssh server running in the US, or in any country where pandora streams, you can spin up a proxy connection by running the following command:
ssh -D 1337 -f -C -q -N username@yourserver.com
Once you have this running you’ll need to change your pianobar config to make it use this proxy
// Tickers use a similar mechanism to timers: a
// channel that is sent values. Here we'll use the
// `range` builtin on the channel to iterate over
// the values as they arrive every 500ms.
ticker := time.NewTicker(time.Millisecond * 500)
go func() {
    for t := range ticker.C {
        fmt.Println("Tick at", t)
    }
}()

// Tickers can be stopped like timers. Once a ticker
// is stopped it won't receive any more values on its
// channel. We'll stop ours after 1600ms.
time.Sleep(time.Millisecond * 1600)
ticker.Stop()
fmt.Println("Ticker stopped")
}
These are very nice for running code at every interval. If you want something like this in Elixir,
it can be implemented in a few lines of code.
This code allows you to create a Ticker process by calling Ticker.start with a recipient_pid
which is the process which receives the :tick events, a tick_interval to specify how often
a :tick event should be sent to the recipient_pid and finally a duration whose default is
:infinity which means it will just keep ticking till the end of time. Once you set this up,
the recipient will keep getting :tick events for every tick_interval.
Go ahead and read the code, it is pretty self explanatory.
There is also erlang’s :timer.apply_interval(time, module, function, arguments) which will run
some code for every interval of time. However, the code below doesn’t create overlapping executions.
defmodule Ticker do
  require Logger

  # public api
  def start(recipient_pid, tick_interval, duration \\ :infinity) do
    # Process.monitor(pid) # what to do if the process is dead before this?
    # start a process whose only responsibility is to wait for the interval
    ticker_pid = spawn(__MODULE__, :loop, [recipient_pid, tick_interval, 0])
    # and send a tick to the recipient pid and loop back
    send(ticker_pid, :send_tick)
    schedule_terminate(ticker_pid, duration)
    # returns the pid of the ticker, which can be used to stop the ticker
    ticker_pid
  end

  def stop(ticker_pid) do
    send(ticker_pid, :terminate)
  end

  # internal api
  def loop(recipient_pid, tick_interval, current_index) do
    receive do
      :send_tick ->
        send(recipient_pid, {:tick, current_index}) # send the tick event
        Process.send_after(self, :send_tick, tick_interval) # schedule a self event after interval
        loop(recipient_pid, tick_interval, current_index + 1)
      :terminate ->
        :ok # terminating
        # NOTE: we could also optionally wire it up to send a last_tick event when it terminates
        send(recipient_pid, {:last_tick, current_index})
      oops ->
        Logger.error("received unexpected message #{inspect oops}")
        loop(recipient_pid, tick_interval, current_index + 1)
    end
  end

  defp schedule_terminate(_pid, :infinity), do: :ok
  defp schedule_terminate(ticker_pid, duration),
    do: Process.send_after(ticker_pid, :terminate, duration)
end
defmodule Listener do
  def start do
    Ticker.start self, 500, 2000 # will send approximately 4 messages
  end

  def run do
    receive do
      {:tick, _index} = message ->
        IO.inspect(message)
        run
      {:last_tick, _index} = message ->
        IO.inspect(message)
        :ok
    end
  end
end
I am working on an open source side project called webmonitorhq.com
It notifies you when your service goes down. It also stores the events when a service goes down
and comes back up. I wanted to show the uptime of a service over a duration of 24 hours, 7 days, etc.
This is the algorithm I came up with. Please point out any improvements that can be made to it; I’d love to hear them.
The prerequisite to this algorithm is that you have data for the UP events and the DOWN events
I have a table called events with an event string and an event_at datetime
events
id
event (UP or DOWN)
event_at (datetime of event occurrence)
Algorithm to calculate downtime
Decide the duration (24 hours, 7 days, 30 days)
Select all the events in that duration
Add an UP event at the end of the duration
Add an inverse of the first event at the beginning of this duration
e.g. if the first event is an UP, add a DOWN, and vice versa
Start from the first UP event after a DOWN event and subtract the DOWN event_at from the UP event_at, do this till you reach the end. This gives you the downtime
Subtract the downtime from the duration to get the uptime
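The steps above can be sketched in Elixir like this (the module name and second-based timestamps are illustrative; it assumes the events within the window alternate between UP and DOWN, as they do when every DOWN is eventually followed by an UP):

```elixir
defmodule Uptime do
  # events: [{"UP" | "DOWN", seconds}] sorted by time within the window
  def downtime(events, window_start, window_end) do
    {first_event, _} = hd(events)

    # pad the window: an inverted event at the start, an UP at the end
    padded = [{invert(first_event), window_start} | events] ++ [{"UP", window_end}]

    padded
    |> Enum.chunk_every(2, 1, :discard)
    |> Enum.reduce(0, fn
      # every DOWN followed by an UP contributes to the downtime
      [{"DOWN", down_at}, {"UP", up_at}], acc -> acc + (up_at - down_at)
      _other_pair, acc -> acc
    end)
  end

  def uptime(events, window_start, window_end) do
    window_end - window_start - downtime(events, window_start, window_end)
  end

  defp invert("UP"), do: "DOWN"
  defp invert("DOWN"), do: "UP"
end

Uptime.downtime([{"DOWN", 10}, {"UP", 20}, {"DOWN", 90}], 0, 100) # => 20
```

The padding implements steps 3 and 4: appending an UP closes out a window that ends while the service is down, and prepending the inverse of the first event counts downtime before an initial UP.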