make evil monkey nag you back to work

###Update: fixed the cron entry

I read a very interesting article, “Why programmers work at night”. One of the points the author makes is how we get engrossed in twitter/hacker news/reddit. I’ve felt the same. I think one of the reasons we (programmers/developers) spend so much of our time on twitter/hacker news/reddit is that we lose all sense of time. Time just flies by. So, I created a small ruby script which nags you to get back to work :)

##~/.scripts/nagger


#!/usr/bin/env ruby
require 'time'

exit if File.exist?("/tmp/stop-nagging")
#see what I did here ;)

#run the command below to find your display
#env | grep DISPLAY
ENV['DISPLAY'] = ':0.0'

#the last non-empty line of the timelog holds the latest entry
last_line = `tail -2 ~/.gtimelog/timelog.txt`.lines.map { |x| x.chomp }.reject { |x| x.empty? }.last
#characters 11..15 of the entry hold the "HH:MM" timestamp
minutes = ((Time.now - Time.parse(last_line[11, 5])) / 60).round
evil_monkey = File.expand_path File.join(File.dirname(__FILE__), 'evil-monkey.gif')

if minutes > 30
  `notify-send -i '#{evil_monkey}' "It's been #{minutes} minutes since your last log"`
end

##cron entry


0,5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/bash -l -c '/home/minhajuddin/.scripts/nagger'

Evil monkey nagging me to get back to work

Hope it helps you get back to work too :). By the way, I use the awesome gtimelog app to log my time.
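The `last_line[11, 5]` slice in the script works because gtimelog writes entries as `YYYY-MM-DD HH:MM: activity`, so characters 11..15 hold the clock time. A quick sketch of that parsing step (the sample entry below is made up):

```ruby
require 'time'

# hypothetical gtimelog entry; characters 11..15 are the "HH:MM" part
line = "2011-07-25 14:30: reading email"
stamp = line[11, 5]   # "14:30"

# Time.parse turns "14:30" into today's date at 14:30
minutes_ago = ((Time.now - Time.parse(stamp)) / 60).round
```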

script to do a global search and replace in a git repository

There are many instances where I had to replace some variable name across all my files. I use a small script to do this. Hope it helps you too.


#!/bin/bash
#~/.scripts/git-sub
#Author: Khaja Minhajuddin <minhajuddin@cosmicvent.com>
#script which does a global search and replace in the git repository
#it takes two arguments
#e.g. git sub OLD NEW

old=$1
new=$2

#git grep -l lists only the files which contain the old string
for file in $(git grep -l "$old")
do
  echo "replacing '$old' with '$new' in '$file'"
  sed -i -e "s/$old/$new/g" "$file"
done

Just remember to add it to a directory which is in your $PATH. I have it in my ~/.scripts directory, which is included in my $PATH. Name it git-sub and give it executable permissions using chmod +x ~/.scripts/git-sub. Now, you can just call git sub old_var new_var in the terminal and it will do a global search and replace across all the files in the repository.

elegance of functional programming

Functional programming allows you to write concise and elegant code. Mainstream languages like Ruby and C# support a lot of functional programming paradigms, and learning them makes you a better programmer. Below is a small example which demonstrates that:


#6 lines of ugly code
i = 0
tasks = list.tasks
while i < tasks.length - 1
  tasks[i].priority.should >= tasks[i + 1].priority
  i += 1
end


#3 lines of elegant functional code
list.tasks.each_cons(2) do |t1, t2|
  t1.priority.should >= t2.priority
end
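each_cons is doing the heavy lifting here: it yields a sliding window of consecutive elements. Here it is in isolation on a plain array (made-up data, outside the spec context):

```ruby
# each_cons(2) yields every adjacent pair
pairs = [4, 3, 2, 1].each_cons(2).to_a
# pairs is [[4, 3], [3, 2], [2, 1]]

# the same descending-order check as in the spec, without should
sorted_desc = pairs.all? { |a, b| a >= b }
# sorted_desc is true
```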

gc your git repositories automatically with a cron task

I have a lot of git code repositories, and I usually gc (garbage collect) them manually by running the git gc command every now and then. Tasks like these are prime candidates for automating with cron. Below is a cron entry and the script which gcs my repositories. Hope you guys find it useful.

###the script


#!/bin/bash
#author: Khaja Minhajuddin
#email: minhajuddin.k@gmail.com
#path: /home/minhajuddin/.cron/reboot.sh
#description: script which is executed every time the computer starts

#git gc repos
REPO_DIRS=$(cat <<EOS
$HOME/repos
$HOME/repos/core
EOS
)

for repo_dir in $REPO_DIRS
do
  echo "checking for git repos in $repo_dir"
  for repo in $(ls "$repo_dir")
  do
    cd "$repo_dir/$repo" || continue
    if [[ -d .git ]]
    then
      echo "garbage collecting $repo"
      git gc
    fi
  done
done

###the crontab entry


$ crontab -e
#add the line below into the editor and save it
@reboot $HOME/.cron/reboot.sh

Bonus tip: If you have a gitosis server, put the following script at ~git/.cron/reboot.sh and perform the above step for your git user.

###the gitosis git user script


#!/bin/bash

for repo in $(ls ~/repositories)
do
  cd ~/repositories/$repo
  echo "garbage collecting $repo"
  git gc
done

automatically push your git repo to a server on shutdown

Sometimes, I forget to push my git commits to our git server at the end of the day. This causes inconvenience to others as they can’t review my code or build upon it. So, today I wrote a small script which syncs all my git repositories with a remote server. Hope it helps you too :)

The setup consists of three files:

###core syncing script at ~/.scripts/sync-repos###


#!/usr/bin/env ruby
require 'rubygems'
require 'yaml'

#replace google.com with your git server's domain
`ping -c 1 google.com`
if $?.exitstatus != 0
  puts 'UNABLE TO SYNC REPOS AS NW IS DOWN'
  exit $?.exitstatus
end

puts 'syncing repositories'

@repos = YAML::load_file File.expand_path('~/.sync-repos')

@repos.each do |repo|
  path = File.expand_path repo[:path]
  remotes = repo[:remotes].is_a?(String) ? [repo[:remotes]] : repo[:remotes]
  unless File.exist? path
    puts "skipping #{path} as directory not found"
    next
  end

  remotes.each do |remote|
    cmd = "cd #{path} && git push #{remote}"
    puts "executing: '#{cmd}'"
    system(cmd)
  end
end

puts 'done syncing repositories'

###config file pointing to all the repos at ~/.sync-repos###


---
- :path: ~/repos/search
  :remotes:
  - origin
- :path: ~/repos/logbin
  :remotes:
  - origin
  - local
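The config file is plain YAML, so `YAML::load_file` hands the script an array of hashes with symbol keys. A quick check, with an inline string standing in for the file (the guard is only there because newer rubies restrict symbol keys in `YAML.load`):

```ruby
require 'yaml'

text = <<EOS
---
- :path: ~/repos/search
  :remotes:
  - origin
EOS

# Psych 4 (Ruby >= 3.1) needs unsafe_load for symbol keys; older rubies use load
config = YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(text) : YAML.load(text)
# config.first[:path]    is "~/repos/search"
# config.first[:remotes] is ["origin"]
```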

###upstart shutdown trigger script at /etc/init/syncrepos.conf###


start on runlevel [06]

exec /bin/bash -l -c /home/minhajuddin/.scripts/sync-repos

how to setup solr and sunspot on a rails production server

Solr is an awesome app built on top of Lucene for fulltext search. However, setting it up can be a pain if you don’t find the right guide, or if you miss some small detail. So, here is my attempt to document the process of setting up solr in development and production using a rails app as an example.

Solr and Lucene are java apps, so you need java to get this stuff working. I installed sun-jdk just to play it safe, but as far as I know it works well even with openjdk.

Steps to setup solr on production

  1. Install Sun JDK:

#install and setup sun jdk
echo "deb http://archive.canonical.com/ $(lsb_release -cs) partner"| sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt-get update
sudo apt-get install sun-java6-jre sun-java6-bin sun-java6-jdk -y
sudo update-alternatives --config java
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun/' >> ~/.bashrc

  2. Download and setup tomcat: you’ll need tomcat version 6.0 for your production server. Download the latest v6 tomcat files from http://tomcat.apache.org/download-60.cgi and extract them into ~/apps.

  3. Download or build the solr war files and copy them to ~/apps/solr. You can find the links at: http://lucene.apache.org/solr/ or http://www.apache.org/dyn/closer.cgi/lucene/solr/. The war file is usually in a folder called dist and has a filename like apache-solr-3.4.0.war

  4. Create a file ~/apps/tomcat/conf/Catalina/localhost/solr-appname.xml with the following content:

<?xml version="1.0" encoding="utf-8"?>
<!-- I usually create this file in the rails app config/ directory and symlink
     it to the ~/tomcat/conf/Catalina/localhost/ directory -->
<!-- the docBase path should point to your solr.war file -->
<Context docBase="/home/minhajuddin/apps/solr/solr.war" debug="0" crossContext="true">
  <!-- the value string should point to your app's solr directory -->
  <Environment name="solr/home" type="java.lang.String" value="/home/minhajuddin/spikes/solr-blog/solr" override="true"/>
  <!-- value = app-name/solr -->
</Context>


Steps till this point are the same for any solr installation, be it for a rails app or any other app.

  5. I use the sunspot_rails gem in my rails application. When using it, you can run rails g sunspot_rails:install to create a config/sunspot.yml file. Once you have the config file, change the production config values to point to the right port and path, e.g.:

..
production:
  solr:
    hostname: localhost
    port: 8080
    path: '/solr-odir/'

  6. Run the bundle exec rake sunspot:solr:start command once, on the development machine, to generate the solr configuration files, and push this code to the production server.

That’s all. Setting up solr is not very straightforward, but once you have it set up, it’s very easy to point additional apps at the same solr server.

On a development machine, all you need to get solr working is to install java (check step 1), set up sunspot (check step 5), and start the solr server with bundle exec rake sunspot:solr:start.


simple log management and viewing for your servers

As a guy who develops, deploys and maintains webapps, I’ve had to log in to my servers and tail the logs to track down issues far too many times. It’s a real pain, and anybody who maintains servers knows this.

I’ve recently found a bunch of very good apps which make this job very pleasant: PaperTrail is an awesome app which makes it very simple to set up a logging daemon and view all your logs (from all your servers) on their website. It’s a very neat implementation. But you might not want to send your logs to a third-party app, as logs usually contain sensitive information.

logstash is another awesome open source implementation for log management. With logstash, you set up a small central server which collects all your logs and allows you to access them through a variety of interfaces. Another advantage of logstash is that the logs stay on your server under your control, plus it’s open source. The only downside is the one-time setup, which is not that hard. It is very versatile in the ways it allows you to access your logs.

If neither of them seems to be your thing, here is a small script which I use to tail remote log files. It runs tail -f over an ssh connection. It’s very simple to set up and use. Once you set it up, you can just cd into your application directory and run rt and it will start tailing your log files instantly. If you have any improvements you can fork this gist and update it.


#!/usr/bin/env ruby
#~/.scripts/rt
require 'rubygems'
require 'yaml'
require 'erb'

ConfigFilePath = File.expand_path("~/.remote-tail.yml")

#will be written out to ConfigFilePath if not present
SampleConfig = <<EOS
---
defaults: &defaults
  host: c3
foonginx:
  <<: *defaults
  file: /var/log/nginx/*.log
barapp:
  host: foo@bar.com
  file: /var/www/apps/railsfooapp/shared/log/*.log
EOS

Usage = <<EOS
Usage:
1. cd into the directory whose name is the same as the name of the config and run
   rt

2. rt <name of the app>

3. rt <host> <file>
EOS

def tail(opts)
  cmd = "ssh #{opts['host']} 'tail -f #{opts['file']}'"
  puts "running: '#{cmd}'"
  system(cmd)
end

def config(app)
  puts "using app:#{app}"
  config = YAML::load_file ConfigFilePath
  return config[app] if config[app]
  puts "app:#{app} not found in #{ConfigFilePath}"
  puts Usage
  exit 2
end

def setup
  return if File.exist? ConfigFilePath
  puts "creating a sample config at: #{ConfigFilePath}"
  File.open(ConfigFilePath, 'w') do |f|
    f.print SampleConfig
  end
end

def init
  setup
  case ARGV.length
  when 0
    #usage:
    #cd to the app root directory, usually this would be the name with which you
    #setup the configuration and run
    #$ rt
    tail config(File.basename(Dir.pwd))
  when 1
    #usage:
    #from any directory
    #$ rt <name of the app>
    tail config(ARGV.first)
  when 2
    #usage:
    #from any directory
    #$ rt <host> <file>
    #string keys to match the hashes loaded from the yaml config
    tail 'host' => ARGV.first, 'file' => ARGV.last
  else
    puts "Invalid number of arguments"
    puts Usage
    exit 1
  end
end

init

how to change the rails root url based on the current user or role

In my latest rails app, I needed the root url to be different based on the logged in user, i.e. if the user was logged in I wanted to show one page, if not I wanted to show a generic page. Rails 3 makes this very easy.

While drawing routes, rails gives you the ability to constrain a route based on anything in the incoming request. As it happens, I was using devise for my authentication needs, and devise uses warden, which fills up the request’s env with the current user. Once I had the current user, a simple conditional statement was all that was needed to get my routes working. Check out the implementation below to see how it’s done:


#lib/role_constraint.rb

class RoleConstraint
  def initialize(*roles)
    @roles = roles
  end

  def matches?(request)
    @roles.include? request.env['warden'].user.try(:role)
  end
end

#config/routes.rb
root :to => 'admin#index', :constraints => RoleConstraint.new(:admin) #matches this route when the current user is an admin
root :to => 'sites#index', :constraints => RoleConstraint.new(:user) #matches this route when the current user is a user
root :to => 'home#index' #matches this route when the above two don't match

mind stack, a stack of your thoughts and tasks

As developers, we are always bombarded with information/tasks/thoughts/ideas. And at times, it’s very difficult to remember all these things. On a lot of occasions, I start doing task X and in the middle of it, I remember that I need to “fix something urgently”, so I stop doing X and move to the urgent task Y; when I am done with Y I have difficulty remembering what I was doing before that. This is just with two tasks, but the level of nesting can sometimes go a lot deeper.

That’s when I read a blog post (can’t remember where) which talked about saving your state of mind (on post-it notes or notebooks or whatever). And it has helped me a lot. I also created a little bash script which helps me save my state of mind. I’ve been using it for a long time and it has served me well. I am posting it on github hoping that others may find it useful. You can check it out at Mind::Stack.

I also have the following line in my .xmobarrc so that I can see the top 3 tasks in my status bar.


, Run Com "/home/minhajuddin/.scripts/s" ["top"] "slotter" 600

Screenshot of my xmobar

Mindstack xmobar screenshot

backup mongodb databases to s3

Here are a bunch of scripts which can be used to backup your mongodb database files to S3.


#!/usr/bin/env ruby
#mongodbbak
require 'rubygems'
require 'aws/s3'
require 'pony'

#run export
Dir.chdir("#{ENV['HOME']}/archives/mongodb/")
puts 'dumping..'
`mongodump`

#zip
puts 'compressing..'
hostname = `hostname`.chomp
file = "mongodb.#{hostname}.#{Time.now.strftime "%Y%m%d%H%M%S"}.tar.gz"
md5file = "#{file}.md5sum"
`tar cf - dump --remove-files | gzip > #{file}`
`md5sum #{file} > #{md5file}`

#copy
puts "copying #{file} to s3.."
AWS::S3::Base.establish_connection!(
  :access_key_id => ENV['AMAZON_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY']
)
AWS::S3::S3Object.store(file, open(file), 'cvbak')
puts "#{file} uploaded"
#upload the md5 checksum too
AWS::S3::S3Object.store(md5file, open(md5file), 'cvbak')
puts "#{md5file} uploaded"

#message
Pony.mail(:to => 'min@mailinator.com', :from => 'sysadmin@mailinator.com', :subject => "[sys] db on #{hostname} backed up to #{file}", :body => "mongodb database on #{hostname} has been successfully backed up to #{file}")
puts 'done'


#crontab -l
@daily /bin/bash -i -l -c '/home/ubuntu/repos/server_config/scripts/mongodbbak' >> /tmp/mongobak.log 2>&1


#~/.bashrc
export AMAZON_ACCESS_KEY_ID='mykey'
export AMAZON_SECRET_ACCESS_KEY='mysecret'

easily show current version number of your app, stackoverflow style

When your app is deployed in multiple environments (staging, production), knowing the version number of your deployed app helps a lot in debugging. Stackoverflow does a great job of showing a meaningful version in its footer. Currently, it shows its version number as rev 2011.7.22.2. This tells us that the code running stackoverflow was last updated on 2011.7.22, and that it was updated twice on that day.

You can set up a similar thing pretty easily if you are using git and rails (rails is not really needed, but my example uses it). All you need to do is add the following line to your config/application.rb:


#config/application.rb
module Khalid
  class Application < Rails::Application
    .
    .
    .
    #cache the version as long as the app is alive
    #e.g. 2011.07.25.4c76f53
    VERSION = `git --git-dir="#{Rails.root.join(".git")}" --work-tree="#{Rails.root}" log -1 --date=short --format="%ad-%h" | sed 's/-/./g'`.chomp
    .
    .
    .
  end
end

This gives you a constant called Khalid::Application::VERSION which holds a nice version number containing the date and the commit sha, like this: 2011.07.25.4c76f53
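The shell pipeline is just reformatting git’s output; the same transformation can be seen in plain ruby (the sample commit line below is made up):

```ruby
# what `git log -1 --date=short --format="%ad-%h"` might print
raw = "2011-07-25-4c76f53\n"

# the sed 's/-/./g' step plus trailing-newline cleanup
version = raw.chomp.tr('-', '.')
# version is "2011.07.25.4c76f53"
```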

joy of using linux

I’ve been using linux for a long time now, and I have been loving it since the first day. However, when someone asks me Why linux? I find it difficult to explain. And then, I give up and fall back to Linux is free (as in free beer). Well, I had a series of fortunate events yesterday, which (I hope) will explain my love for linux.

So, yesterday was a special day for me, we launched a webapp specifically designed to publish results of different exams (You can check it out at Ramanujan Result Engine). To publish the results, we (me and my colleague Nagaraju) had to go to the university at 08:30 in the morning to get the CDs which had the results’ data. Now, we were given the format of the data (just the column names) one day in advance, and we tested our data importing utility assuming that the data would be a simple CSV file. As soon as the CDs were handed out we tried to extract the data, but it turned out that the zipped files which had the data were password protected and that some minister would release the password at 09:00. At this point I checked if I could unzip the data through the command prompt (I was just thinking of creating a script which would automate the unzipping and uploading once the password was given). That’s when a good and a bad thing happened: when we entered the wrong password it didn’t extract the files, but it printed a message telling us which files it was unable to extract. We were happy that we at least knew the directory structure and filenames and that we could just write a script to automate everything, but to our horror we found out that the university folks had given us the data as mdb files. I was stumped. I thought I had to get to a windows machine with microsoft access on it, export the data to a csv file and then run it through our uploader, which seemed like a long process that would take at least a few hours.

As a last resort, I googled for mdb to csv conversion to see if someone knew a way to do it on linux, and voila I found an awesome page which documented the use of mdb-export which looked like it would do our job. I was stoked with happiness. Here we were trying to be the first guys who could get the results online, disillusioned by the format of data given to us and in no time we found a way back into the game, thanks to the awesome linux community.

I quickly ran sudo apt-get install mdb-export in the terminal and got a message: no package found. Hmm. I thought it might have already been installed (by the ubuntu default packages) and ran mdb-export, and ubuntu said please install mdbtools. Sweet :). Ran sudo apt-get install mdbtools, read the manpages, found out that it needed to know the name of the table to export. Googled again, found out that mdb-tables (which is part of mdbtools) would give me the table names. Finally, when the password was released by the minister, I did my mdb-export dance and uploaded the results to our servers.

It is an experience I’ll cherish for a long time. I am sure we were the first guys who published the EAMCET results (we did it within 5 minutes). Other linux users may just give a big meh to this incident, but I am sure you have many such small incidents where the uber awesome linux community helped you out. One of the biggest things that I love about linux is its community. Linux wouldn’t have been what it is without its community, and I am forever indebted to it.

linux awesomeness

A few minutes ago I learnt that adding linenos to the end of the highlight tag in jekyll adds line numbers to a code snippet. I wanted to add this to all the highlight blocks in my blog, and came up with this command:


grep highlight _posts/*.* \
| awk -F : ' { print $1}' \
| uniq \
| xargs sed -i 's/% *highlight *\([a-z]*\) *%/% highlight \1 linenos %/'
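The sed substitution can be tried in isolation; here is the same regex applied to a sample tag in ruby (the sample line is made up):

```ruby
line = "{% highlight ruby %}"

# same pattern and replacement as the sed step
fixed = line.sub(/% *highlight *([a-z]*) *%/, '% highlight \1 linenos %')
# fixed is "{% highlight ruby linenos %}"
```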

painless dotfiles synchronization and versioning using git

Synchronization of dotfiles has been documented to death in many blogs. But, I just wanted to show how I do it:

  1. Initialize (git init) a repository called dotfiles anywhere on your filesystem (I do it in ~/repos/core/).
  2. Copy your dotfiles to this directory, and arrange them in whatever order you want and add them to your git repository.
  3. Now, tweak line#2 in the following script to include your dotfiles or folders, and run it.
  4. Now, to sync your dotfiles seamlessly between your machines, you just need a remote git server (it can be your own using gitosis, or you can put it on github). I use github to store my dotfiles. Just push your dotfiles to your remote server, clone it on another machine, run the setup.rb and bam, you’re in business.

#!/usr/bin/env ruby
dirs = %w(bash/.bash_aliases bash/.inputrc git/.gitconfig vim/.vim vim/.vimrc .gemrc)
current_dir = File.expand_path(Dir.pwd)
home_dir = File.expand_path("~")

dirs.each do |dir|
  dir = File.join(current_dir, dir)
  symlink = File.join(home_dir, File.basename(dir))
  `ln -ns #{dir} #{symlink}`
end

This way you don’t have to copy/paste the files when you make changes to them. Just change the file, commit and push, and pull it on the other machines.

how to get vanity urls in rails

Getting vanity urls to work in rails is very simple. Let’s say you want to allow your users to expose their Profiles through a facebook-like url http://www.facebook.com/GreenDay. This is what you need to change in your routes file.

The :except => [:show] on line#5 makes the resources helper skip the show route. And line#9, which should be at the end of the routes file, creates a route called profile which will be used for all the profile show links automatically. That’s it, your application now has vanity urls; whenever someone clicks on a profile#show link they will be taken to /:slug. Obviously, in this case the slug is assumed to be unique.


Funky::Application.routes.draw do
  .
  .
  .
  resources :profiles, :except => [:show]
  .
  .
  #at the end of the routes file
  get ':slug' => 'profiles#show', :as => 'profile'
end

###Update### One of the readers emailed me asking what the controller code would look like. Here it is:


#app/models/profile.rb
class Profile < ActiveRecord::Base
  #should have a column called "slug"
end

#app/controllers/profiles_controller.rb
class ProfilesController < ApplicationController
  def show
    @profile = Profile.find_by_slug(params[:slug])
  end
end
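The post assumes each profile already has a unique slug; a minimal sketch of how one might derive a facebook-style slug from a display name (the `slugify` helper is hypothetical, not part of the original code):

```ruby
# strip everything but letters and digits, facebook-style
def slugify(name)
  name.gsub(/[^a-zA-Z0-9]+/, '')
end

slugify("Green Day")  # => "GreenDay"
```

In a real app you would also need to guarantee uniqueness, e.g. by appending a counter when the slug is taken.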


reminiscing my past

I consider myself a self-taught developer for the most part. And free screencasts were the next best thing to having a trainer for me. The importance of screencasts gets lost once you get a grip on programming and development. But in the initial days, when you don’t know anything about programming, having a human voice guide you can make all the difference in the world.

Below is a list of screencasts/videos/books which helped me in my initial days of web development:

I haven’t talked about my past on my blog. But I had a secure and high paying job at a big MNC called TCS from 2006 to 2008. I used to work as an ETL developer, designing jobs using a tool called Datastage. It was a niche skill and I was very good at it. The only problem was it didn’t give me an opportunity to program. Designing high performance ETL jobs was interesting, but it never satisfied my hunger for programming. Anyway, I decided to leave my secure job and start my own gig around Aug 2008. My first programming job as a freelancer was fixing a slow performing sql query (btw, the problem it had was that it was doing a cartesian join instead of an inner join). I did it for $8 (out of which I got $5; the other $3 went to rentacoder.com), but the satisfaction and confidence it gave me was priceless. I have come a long way since then, and have developed at least half a dozen high quality web applications. And I owe a lot of what I know to the gentlemen who devoted their time to spreading knowledge through screencasts, blog posts and books.

I am planning to do a series of screencasts on ruby on rails development using ubuntu and post them on vimeo, hoping to help beginners. I’ll try and post at least one video every week.

ruby on rails soup to nuts


This is the first in a set of screencasts on ruby on rails. The purpose of these screencasts is to teach you ruby on rails from the ground up using an ubuntu dev environment.

Here is the first one:

Notes for Session 1: Setup an ubuntu machine with the basic software

  1. Download and install Ubuntu 10.10
  2. Install Ruby and Rails via Rails Ready
  3. Setup gvim (which will be our primary editor)
  4. Importance of the terminal https://help.ubuntu.com/community/UsingTheTerminal
  5. Install rvm using railsready
    url:     https://github.com/joshfng/railsready
    command: wget --no-check-certificate https://github.com/joshfng/railsready/raw/master/railsready.sh && bash railsready.sh
    
  6. Configure rvm and install ruby

#replace the following, which is usually found on line 6 of ~/.bashrc
[ -z "$PS1" ] && return
# with
if [[ ! -z "$PS1" ]] ; then
  # also put the following line at the end of the file
  [[ -s $HOME/.rvm/scripts/rvm ]] && source $HOME/.rvm/scripts/rvm
fi

  7. Install rails: gem install rails

Next screencast:

How to build a simple rails application. Things that will make it easy for you to follow the next session: html, http, basics of ruby programming, basics of css.

how to hookup nginx with startssl

This is a note-to-self

StartSSL is a certification authority which gives away free SSL certificates valid for one year (after which you can renew them again for free). They are simply awesome. Anyway, this blog post documents how you can set up an ssl cert on an nginx server using the startssl free cert.

  • Sign up for a StartSSL account. StartSSL doesn’t give you a username and password; it gives you a client certificate instead (use firefox to sign up). Make sure to back up the client cert.
  • SSH into your server and run the following commands:

openssl genrsa -des3 -out server.key.secure 2048
openssl rsa -in server.key.secure -out server.key
openssl req -new -key server.key -out server.csr

  • On startssl, browse to the control panel, then to the validations wizard, and validate the domain for which you want to generate your ssl cert.
  • Now go to the certificates wizard tab in the control panel and create a web server ssl certificate. Skip the first step and paste your server.csr file in the next step. Finish the rest of the steps of this wizard.
  • Browse to the tool box in the control panel and click on retrieve certificate. Copy your certificate and paste it into a file called server.crt on the server.
  • Download sub.class1.server.ca.pem to your server.
  • Now run cat sub.class1.server.ca.pem >> server.crt to append the intermediate certificate to your cert.
  • Run the commands:

sudo cp server.crt /etc/ssl/example.com.crt
sudo cp server.key /etc/ssl/example.com.key

  • Change your nginx conf to:

server {
  .
  .
  listen 80;
  listen 443 ssl;
  # note: no "ssl on;" here; it would force ssl on the port 80 listener too
  ssl_certificate /etc/ssl/example.com.crt;
  ssl_certificate_key /etc/ssl/example.com.key;
  .
  .
}

  • Restart your nginx server


pagination for performance intensive queries using nhibernate and sql server temporary tables

Pagination is a solved problem. A simple google search shows 11,200,000 results. Whoa! The basic implementation is simple: to paginate a result set you run two almost identical queries, one which fetches the count of the result set, and another which skips and takes the desired slice from it.

This is fine in most cases. But when your query is very performance intensive, you just can’t afford to run it twice. I ran into a similar situation recently and was searching for a decent approach to this problem, and then I bumped into the awesome temporary tables in SQL Server. Once I knew about them the solution became very simple. It still needs two queries, but it doesn’t run the performance intensive query twice. See for yourself:


-- first query
SELECT pi.*
INTO #TempPerfIntensiveTable
FROM
.. a 100 JOINS or SPATIAL FUNCTIONS ...;

SELECT COUNT(*)
FROM #TempPerfIntensiveTable;

-- end of first query

-- second query
-- :skip and :take are sql parameters for pagination
SELECT pr.* FROM
  (SELECT tp.*, ROW_NUMBER() OVER(ORDER BY tp.Id) AS ROWNUM
   FROM #TempPerfIntensiveTable tp) pr
WHERE pr.ROWNUM BETWEEN :skip AND :take + :skip;
-- end of second query


These queries need to be executed by calling session.CreateSQLQuery(query).SetInt32..... This stuff is not specific to NHibernate; I just put it out there to help future searchers :)