Algorithm to compute downtime of a service/server

I am working on an open source side project called webmonitorhq.com. It notifies you when your service goes down, and it stores the events when a service goes down and comes back up. I wanted to show the uptime of a service for a duration of 24 hours, 7 days, etc.

This is the algorithm I came up with. Please point out any improvements that can be made to it; I’d love to hear them.

The prerequisite for this algorithm is that you have data for the UP events and the DOWN events.

I have a table called events with an event string and an event_at datetime:

events
id
event (UP or DOWN)
event_at (datetime of event occurrence)

Algorithm to calculate downtime

  1. Decide the duration (24 hours, 7 days, 30 days)
  2. Select all the events in that duration
  3. Add an UP event at the end of the duration
  4. Add an inverse of the first event at the beginning of the duration, e.g. if the first event is an UP, add a DOWN and vice versa
  5. Starting from the first UP event that follows a DOWN event, subtract the DOWN event_at from the UP event_at; repeat this until you reach the end. This gives you the downtime
  6. Subtract the downtime from the duration to get the uptime

e.g.

  1. 24 hour duration. Current Time is 00hours
  2. UPat1 DOWNat5 UPat10
  3. UPat1 DOWNat5 UPat10 UPat24
  4. DOWNat0 UPat1 DOWNat5 UPat10 UPat24
  5. (UPat1 - DOWNat0) + (UPat10 - DOWNat5) => Downtime = 1 + 5 = 6
  6. 24 - 6 => Uptime = 18 hours
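The steps above can be sketched in Ruby. This is a minimal illustration only; the `downtime` method and the `[type, at]` event pairs are my own names, not from the actual app, and times are plain hours for simplicity:

```ruby
# Minimal sketch of the algorithm above. Events are [type, at] pairs
# where `at` is hours since the start of the window.
def downtime(events, duration)
  events = events.sort_by { |_, at| at } # sort_by returns a copy
  events << [:up, duration]              # step 3: close the window with an UP
  # step 4: prepend the inverse of the first real event at time 0
  first_type = events.first[0]
  events.unshift([first_type == :up ? :down : :up, 0])

  total = 0
  last_down = nil
  events.each do |type, at|
    if type == :down
      last_down ||= at        # keep the earliest DOWN of a run
    elsif last_down           # step 5: first UP after a DOWN
      total += at - last_down
      last_down = nil
    end
  end
  total
end

events = [[:up, 1], [:down, 5], [:up, 10]]
downtime(events, 24)      # => 6
24 - downtime(events, 24) # => 18 (uptime)
```

Note that consecutive DOWN events collapse to the earliest one, which matches the intent of step 5.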

Elixir IO.inspect to debug pipelines

While writing long pipelines, you may want to debug the intermediate values. Just insert |> IO.inspect between the steps of your pipeline.

e.g. in the expression below:

:crypto.strong_rand_bytes(32)
|> :base64.encode_to_string
|> :base64.decode
|> :base64.encode

If we want to check the intermediate values, we just need to add a |> IO.inspect after each step:

:crypto.strong_rand_bytes(32)
|> IO.inspect
|> :base64.encode_to_string
|> IO.inspect
|> :base64.decode
|> IO.inspect
|> :base64.encode
|> IO.inspect

This will print all the intermediate values to STDOUT. IO.inspect is a function which prints its input and returns it unchanged, which makes it safe to drop into the middle of a pipeline.

How to store temporary data and share it with your background processor

In my current project, I had to store some temporary data for a user and let a few background processors have access to it. I wrote something small with a dependency on Redis which does the job.

It allows me to use current_user.tmp[:token_id] = "somedata here" and then access it in the background processor using user.tmp[:token_id] which I think is pretty neat.

Moreover, since my use case needed this for temporary storage, I set it to auto expire in 1 day. If yours is different you could change that piece of code.

# /app/models/user_tmp.rb
class UserTmp
  EXPIRATION_SECONDS = 1.day
  SERIALIZER = Marshal

  attr_reader :user

  def initialize(user)
    @user = user
  end

  def [](key)
    serialized_val = Redis.current.get(namespaced_key(key))
    SERIALIZER.load(serialized_val) if serialized_val
  end

  def []=(key, val)
    serialized_val = SERIALIZER.dump(val)
    Redis.current.setex(namespaced_key(key), EXPIRATION_SECONDS, serialized_val)
  end

  private

  def namespaced_key(key)
    "u:#{user.id}:#{key}"
  end
end
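Because SERIALIZER is Marshal, arbitrary Ruby objects survive the Redis round-trip. Here is a standalone sketch of that dump/load cycle (no Redis needed; the sample hash is made up):

```ruby
# What UserTmp does to a value on the way in and out of Redis:
value = { token: "somedata here", attempts: 3 }

serialized = Marshal.dump(value)      # the string setex stores
restored   = Marshal.load(serialized) # what UserTmp#[] gives back

restored == value # => true
```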

And here is the user class

# /app/models/user.rb
class User < ActiveRecord::Base
  #...
  def tmp
    @tmp ||= UserTmp.new(self)
  end
  #...
end

Hope you find it useful :)

Subdomains to restrict from your SaaS app

Many SaaS apps allow users to host their websites under their root domain, e.g. GitHub allows you to host your sites using GitHub Pages under the .github.io domain.

Here is a list of subdomains which you should reserve while building your own SaaS product.

I usually put this data in a /data/reserved_subdomains file and then use it like below:

class Site < ActiveRecord::Base
  #...
  # validations
  SUBDOMAIN_RX = /\A[a-z\d]+(-[a-z\d]+)*\Z/i
  validates :subdomain, presence: true,
            uniqueness: true,
            length: { in: 4..63, unless: Proc.new { user && user.admin? } },
            format: { with: SUBDOMAIN_RX },
            exclusion: { in: File.read(Rails.root.join("./data/reserved_subdomains")).each_line.map(&:strip),
                         unless: Proc.new { user && user.admin? } }
  #...
end
about
abuse
access
account
accounts
address
admanager
admin
admindashboard
administration
administrator
administrators
admins
adsense
adult
advertising
adwords
affiliate
affiliates
ajax
analytics
android
anon
anonymous
api1
api2
api3
apps
archive
assets
assets1
assets2
assets3
assets4
assets5
atom
auth
authentication
avatar
backup
banner
banners
beta
billing
billings
blog
blogs
board
bots
business
cache
cadastro
calendar
campaign
careers
chat
client
cliente
clients
cname
code
comercial
community
compare
compras
config
connect
contact
contest
copyright
cpanel
create
css1
css2
css3
dashboard
data
delete
demo
design
designer
devel
developer
developers
development
directory
docs
domain
donate
download
downloads
ecommerce
edit
editor
email
e-mail
example
favorite
feed
feedback
feeds
file
files
flog
follow
forum
forums
free
gadget
gadgets
games
gettingstarted
graph
graphs
group
groups
guest
help
home
homepage
host
hosting
hostmaster
hostname
html
http
httpd
https
image
images
imap
img1
img2
img3
inbox
index
indice
info
information
intranet
invite
invoice
invoices
ipad
iphone
jabber
jars
java
javascript
jobs
knowledgebase
launchpad
legal
list
lists
login
logout
logs
mail
mail1
mail2
mail3
mail4
mail5
mailer
mailing
main
manage
manager
marketing
master
media
message
messages
messenger
microblog
microblogs
mine
mobile
movie
movies
music
musicas
mysql
name
named
network
networks
news
newsite
newsletter
nick
nickname
notes
noticias
official
online
operator
order
orders
page
pager
pages
panel
partner
partnerpage
partners
password
payment
payments
perl
photo
photoalbum
photos
pics
picture
pictures
plugin
plugins
policy
pop3
popular
portal
post
postfix
postmaster
posts
press
privacy
private
profile
project
projects
promo
public
python
random
redirect
register
registration
resolver
root
ruby
sale
sales
sample
samples
sandbox
script
scripts
search
secure
security
send
server
servers
service
setting
settings
setup
shop
signin
signup
site
sitemap
sitenews
sites
smtp
soporte
sorry
staff
stage
staging
start
stat
static
statistics
stats
status
store
stores
subdomain
subscribe
suporte
support
survey
surveys
system
tablet
tablets
talk
task
tasks
teams
tech
telnet
test
test1
test2
test3
teste
tests
theme
themes
todo
tools
trac
translate
update
upload
uploads
usage
user
username
usernames
users
usuario
validation
validations
vendas
video
videos
visitor
webdisk
webmail
webmaster
website
websites
whois
wiki
workshop
www1
www2
www3
www4
www5
www6
www7
wwws
wwww
yourdomain
yourname
yoursite
yourusername

Script to cleanup old directories on a linux server

Here is a simple script which can clean up directories older than x days on your server. It is useful for freeing up space by removing temporary directories.

#!/bin/bash
# usage:
#   # deletes dirs inside /opt/builds which are older than 3 days
#   delete-old-dirs.sh /opt/builds 3
#
# cron entry to run this every hour
#   0 * * * * /usr/bin/delete-old-dirs.sh /opt/builds 2 >> /tmp/delete.log 2>&1
# cron entry to run this every day
#   0 0 * * * /usr/bin/delete-old-dirs.sh /opt/builds 2 >> /tmp/delete.log 2>&1

if ! [ $# -eq 2 ]
then
  cat <<EOS
Invalid arguments
Usage:
  delete-old-dirs.sh /root/directory/to-look/for-temp-dirs days-since-last-modification
  e.g. > delete-old-dirs.sh /opt/builds 3
EOS
  exit 1
fi

root=$1
ctime=$2

for dir in $(find "$root" -mindepth 1 -maxdepth 1 -type d -ctime +"$ctime")
do
  # --format %n: filename, %A: access rights, %G: group name, %g: group ID,
  #          %U: user name, %u: user ID, %y: time of last data modification
  echo "removing: $(stat --format="%n %A %G(%g) %U(%u) %y" "$dir")"
  rm -rf "$dir"
done

Put this in your code to debug anything

Aaron Patterson wrote a very nice article on how he does debugging.

Here is some more code to make your debugging easier.

class Object
  def dbg
    self.tap do |x|
      puts ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
      puts x
      puts "<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<"
    end
  end
end

# now you can turn the following:
get_csv.find { |x| x[id_column] == row_id }
# into =>
get_csv.dbg.find { |x| x[id_column.dbg] == row_id.dbg }.dbg

Update:

Josh Cheek has taken this to the 11th level here: https://gist.github.com/JoshCheek/55b53e2faa2776d6a054#file-ss-png Awesome stuff :)

How to open the most recent file created in Vim

When working with Static Site Blogs, you end up creating files with very long names for your blog posts. For example, this very post has a filename source/_posts/how-to-open-the-most-recent-file-created-in-vim.md.

Now, finding this exact file among hundreds of others and opening it is a pain. Here is a small script which I wrote by piecing together stuff from the internet.

# takes 1 argument
function latest(){
  # finding latest file from the directory and ignoring hidden files
  echo $(find $1 -type f -printf "%T@|%p\n" | sort -n | grep -Ev '^\.|/\.' | tail -n 1 | cut -d '|' -f2)
}

function openlatest(){
  ${EDITOR-vim} "$(latest $1)"
}

Now, I can just run openlatest source to open up the file source/_posts/how-to-open-the-most-recent-file-created-in-vim.md in vim and start writing.

This technique can also be used to open the latest Rails migration. I hope this function finds a home in your ~/.bashrc :)
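The same “newest file wins” idea can be sketched in plain Ruby too; `latest_file` below is a hypothetical helper of mine, not part of the shell script above:

```ruby
# Pick the most recently modified file in a directory, mirroring what the
# `latest` shell function does. Dir.glob skips dotfiles by default, and
# File.file? filters out subdirectories.
def latest_file(dir)
  Dir.glob(File.join(dir, "*"))
     .select { |f| File.file?(f) }
     .max_by { |f| File.mtime(f) }
end

# e.g. open the newest migration in your editor:
# system(ENV.fetch("EDITOR", "vim"), latest_file("db/migrate"))
```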

Script your tmux to maximize awesome!

tmux is an awesome terminal multiplexer. I have been an Xmonad user for about 4 years, and every time I heard about tmux in the past I used to think that my window manager was powerful and I didn’t need another terminal manager. But tmux is much more than that.

If you spend a lot of time on your terminal, I urge you to take some time to learn tmux, you’ll be surprised by it. Anyway, the point of this post is to show you its scriptability.

I hacked together the following script from various sources online.

This script manages my workspace for Zammu (an awesome continuous delivery app that I am currently working on; go check it out at https://zammu.in/). Zammu is a Rails app architected to use a bunch of microservices, so to start any meaningful work I need to fire up those agents too. Doing this manually is very tedious; with tmux I have one command to do it:

I just run tmz and it does the following:

  1. Opens my editor with my TODO file in the first window.
  2. Opens a pry console in the second window.
  3. Creates a split pane in the second window with a bash terminal, also runs the git log command, git status command and launches a browser with my server’s url.
  4. Creates a third window with rails server in the first pane, sidekiq in the second pane, foreman start in the third pane which starts all the agents and a guard agent for livereload in a tiny 2 line pane.
  5. Finally it switches to the first window and puts me in my editor.

This has been saving me a lot of time, I hope you find it useful.

I have similar workspace setter uppers for my communication (mutt, rainbowstream, irssi) and other projects.

I just ran the command history | grep '2016-02-17' | wc and it gave me 591 3066 23269. That is 591 commands (3066 words, 23269 characters) in one day, and that’s just the terminal. Do yourself a favor and use tmux.

I have also created a short screencast for it, check it out.

#!/bin/bash
#filepath: ~/bin/tmz

SESSION_NAME='zammu'
ROOT_DIR="$HOME/r/webcore/web"

tmux has-session -t ${SESSION_NAME}
# open these only if we don't already have a session
# if we do just attach to that session
if [ $? != 0 ]
then
  # -n => name of window
  tmux new-session -d -s ${SESSION_NAME} -c ${ROOT_DIR} -n src

  # 0 full-window with vim
  tmux send-keys -t ${SESSION_NAME} "vim TODO" C-m
  # - - - - - - - - - - - - - - - - - - - -
  # 1 pry+terminal
  tmux new-window -n pry -t ${SESSION_NAME} -c ${ROOT_DIR}
  # >> pry
  tmux send-keys -t ${SESSION_NAME}:1 'bundle exec rails console' C-m
  # >> terminal 1index window 1index pane => 1.1
  tmux split-window -h -t ${SESSION_NAME}:1 -c ${ROOT_DIR}
  tmux send-keys -t ${SESSION_NAME}:1.1 '(/usr/bin/chromium-browser http://localhost:3000/ &> /dev/null &);git ll;git s' C-m
  # - - - - - - - - - - - - - - - - - - - -
  # 2 server+logs
  tmux new-window -n server -t ${SESSION_NAME} -c ${ROOT_DIR}
  # >> server
  tmux send-keys -t ${SESSION_NAME}:2 'bundle exec rails server' C-m
  # >> sidekiq
  tmux split-window -v -t ${SESSION_NAME}:2 -c ${ROOT_DIR}
  tmux send-keys -t ${SESSION_NAME}:2.1 'bundle exec sidekiq' C-m
  # >> agents
  tmux split-window -v -t ${SESSION_NAME}:2 -c "${ROOT_DIR}/.."
  tmux send-keys -t ${SESSION_NAME}:2.2 'foreman start' C-m
  # >> guard
  tmux split-window -v -t ${SESSION_NAME}:2 -c "${ROOT_DIR}" -l 1
  tmux send-keys -t ${SESSION_NAME}:2.3 'guard --debug --no-interactions' C-m
  # - - - - - - - - - - - - - - - - - - - -
  # start out on the first window when we attach
  tmux select-window -t ${SESSION_NAME}:0
fi

tmux attach-session -t ${SESSION_NAME}

A very simple environment loader for ruby

There are many gems which do app configuration loading for Ruby. However, you don’t really need a gem for environment loading. Here is a snippet of code which does most of what you want.

require 'yaml'

class EnvLoader
  def load(path)
    YAML.load_file(path).each do |k, v|
      ENV[k] = v.to_s
    end
  end
end

And put this at the top of your application

require_relative '../app/classes/env_loader.rb'
EnvLoader.new.load(File.expand_path('../../env', __FILE__))

Here are some specs

# specs for it
require 'rails_helper'

describe EnvLoader do
  describe '#load' do
    it 'imports stuff into ENV' do
      temp = "/tmp/#{Time.now.to_i}"
      File.write(temp, <<-EOS.strip_heredoc)
        SECRET: This is awesome
        FOO: 33
      EOS
      EnvLoader.new.load(temp)
      expect(ENV['FOO']).to eq('33')
      expect(ENV['SECRET']).to eq("This is awesome")
    end
  end
end

How to fix guard crashing your tmux server

Guard is an awesome rubygem which enables livereload among other things. However, when I ran guard inside tmux, it crashed all my tmux sessions. I guess that is because I am using tmux 2.2 and Guard tries to use tmux notifications for its messages. An easy way to fix this problem is to use libnotify for notifications instead. Just add this line to your Guardfile and you should be good:

notification :libnotify

Stop using Heroku to host static sites

I see many posts on the internet about running static sites using the development server on Heroku.

This is a bad practice and goes completely against what static site generators are for. Static site generators are meant to spit out the HTML needed to serve a site from any basic webserver/webhost. Also, there is GitHub Pages, an excellent host for static content. Heck, it even builds websites automatically using the Jekyll static site generator.

The servers which come bundled with static site generators are a convenience for testing your site locally, not something to run on a production server.

If you are a figure with a big following, please don’t propagate bad practices. It may seem like a fun/clever exercise for you, but in the end it sends the wrong message.

P.S.: I am building an Automatic Deployment Solution which can build and deploy websites to GitHub Pages; it supports Hugo, Jekyll, Middleman, Octopress and Hexo. I would love to hear your feedback on it.

Removing duplication in ERB views using content_for

While writing code to show themes in Zammu, I had to show the same button in two places on the same page. The easy way is to duplicate the code. But that causes problems with maintainability.

e.g.

<%= content_for :secondary_nav do %>
  <!-- <<<<<<<<<<<<<<<<<<<<<<<<< FIRST COPY -->
  <%= form_tag("/") do %>
    <button class='btn btn-lg btn-primary push-top-10'>Looks good, let's clone this</button>
  <% end %>
<% end %>

<div class="row">
  <div class="col-md-4">
    <div class="thumbnail">
    ...
    </div>
  </div>
  <div class="col-md-8">
    <dl>....</dl>
    <!-- <<<<<<<<<<<<<<<<<<<<<<<<< SECOND COPY -->
    <%= form_tag("/") do %>
      <button class='btn btn-lg btn-primary push-top-10'>Looks good, let's clone this</button>
    <% end %>
  </div>
</div>

To remove the duplication, I used content_for to capture the code that had to be duplicated, then used yield to spit it out in the two places. The changed code is:

<%= content_for :clone_btn do %>
  <%= form_tag("/") do %>
    <button class='btn btn-lg btn-primary push-top-10'>Looks good, let's clone this</button>
  <% end %>
<% end %>

<%= content_for :secondary_nav do %>
  <!-- <<<<<<<<<<<<<<<<<<<<<<<<< FIRST COPY -->
  <%= yield(:clone_btn) %>
<% end %>

<div class="row">
  <div class="col-md-4">
    <div class="thumbnail">
    ...
    </div>
  </div>
  <div class="col-md-8">
    <dl>....</dl>
    <!-- <<<<<<<<<<<<<<<<<<<<<<<<< SECOND COPY -->
    <%= yield(:clone_btn) %>
  </div>
</div>

Now if I have to change the button, I only have to do it in one place. Our code is as DRY as a bone :)

Let's build a dumb static site generator

Static Site Generators are awesome because of their speed and robustness.

There are many static site generators.

However, understanding how to use them is not very straightforward for new users. Let us try to build a simple static site generator to better understand the problem.

The problems with managing websites are the issues of publishing, duplication and maintenance. If your website has multiple pages, more than 70% of the structure between the pages is the same: the styling, header, footer and navigation. If you write the HTML for your pages manually, things become difficult when you need to make changes. That is why we have static site generators, which make things more maintainable.

The simplest way to build our generator would be to put the common stuff in one file and the changing content in other files.

For our example we’ll put the common markup in a file called layout.html and the page specific content in their own pages in a pages folder.

So we are looking for something like below:

.
├── layout.html
└── pages
    ├── about.html
    └── index.html

Now with the structure out of the way, we need to decide how we are going to notate the ‘changeable areas’ or ‘placeholders’ in the layout. I am using a dumb notation: _PAGE_TITLE for the title and _PAGE_CONTENT for the page’s content. So our layout looks like this:

# layout.html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <title>_PAGE_TITLE</title>
  </head>
  <body>
    _PAGE_CONTENT
  </body>
</html>

We can now replace these placeholders with the custom content from pages.

Our index page from our example site looks like below:

# pages/index.html
<h1>Welcome to our home</h1>
<p>This is an awesome site</p>

Now, to finally build the website, we need to do the following:

  1. Read the layout.html file.
  2. Read all the individual pages from the pages folder.
  3. For every page, replace the placeholders in the layout and write the result out to public/page-title.html.

Here is our final script:

#!/usr/bin/env ruby
require 'fileutils'

# this generates a static site into a public folder for the current directory

# create the folder
FileUtils.mkdir_p "public"

# read the layout
layout = File.read("layout.html")

# read the pages
Dir["pages/*html"].each do |page_filepath|
  page = File.read(page_filepath)

  # replace the page title and page content
  title = File.basename(page_filepath) # we'll use the filename as the title
  rendered_page = layout.gsub("_PAGE_TITLE", title)
  rendered_page = rendered_page.gsub("_PAGE_CONTENT", page)

  # write it out
  File.write("public/#{title}", rendered_page)
  puts "generated #{title}"
end

puts "DONE"

By the way, I am building an Automatic Deployment solution which can build and deploy Hugo, Hexo, Middleman and Octopress sites to GitHub Pages.

I created a small asciicast too, you can watch it below:

A bash script to replace gtimelog for the terminal

I have been using this script to log my time for a long time, thought I would share it.

# Usage:
# log time
# $ gl browsing redding again
# $ gl finished Hugo recipe for zammu.in
#
# check log
# $ gl
#
# check last 2 logs
# $ gl t -n2
#
# edit the timelog file
# $ gl e
function gl() {
  gtimelog=~/timelog.txt
  [ $# -eq 0 ] && tail $gtimelog $2 && return
  case $1 in
    t|c) tail $gtimelog $2
         ;;
    a) echo "$(date "+%Y-%m-%d %H:%M"): $(tail -1 $gtimelog | sed -e 's/^[0-9 :-]*//g')" >> $gtimelog
       ;;
    e) vi $gtimelog
       ;;
    *) echo "$(date "+%Y-%m-%d %H:%M"): ${@/jj/**}" >> $gtimelog
       ;;
  esac
}

If you have an API make it curlable

These days APIs are everywhere, which is a good thing. However, many APIs are very tedious to use. You can tell whether your API is easy to use by looking at how simple it is to curl.

Take the example of the API call below; it is from a Stripe blog post demonstrating their use of ACH payments. See how easy it is to read and understand the call? Why can’t all APIs be like this?

curl https://api.stripe.com/v1/charges \
  -u sk_test_BQokikJOvBiI2HlWgH4olfQ2: \
  -d amount=250000 \
  -d currency=usd \
  -d description="Corp Site License 2016" \
  -d customer=cus_7hyNnNEjxYuJOE \
  -d source=ba_17SYQs2eZvKYlo2CcV8BfFGz

Anyway, if you are designing an API, please, for the love of all that is holy, make it curlable.

How to get a git archive including submodules

Here is a small script I wrote to get a git archive of your repository, containing all the submodules along with the root repository.

#!/bin/bash
#
# Author: Khaja Minhajuddin
# File name: git-archive-all
# cd root-git-repo; git-archive-all

set -e
set -C # noclobber

# where the final .tar.gz ends up; override via the environment if needed
OUTPUT_FILE="${OUTPUT_FILE:-../repo-output.tar.gz}"

echo "> creating root archive"
export ROOT_ARCHIVE_DIR="$(pwd)"

# create root archive
git archive --verbose --prefix "repo/" --format "tar" --output "$ROOT_ARCHIVE_DIR/repo-output.tar" "master"

echo "> appending submodule archives"
# for each git submodule, create an archive alongside the root archive
git submodule foreach --recursive 'git archive --verbose --prefix=repo/$path/ --format tar master --output $ROOT_ARCHIVE_DIR/repo-output-sub-$sha1.tar'

if [[ $(ls repo-output-sub*.tar | wc -l) != 0 ]]; then
  # combine all archives into one tar
  echo "> combining all tars"
  tar --concatenate --file repo-output.tar repo-output-sub*.tar

  # remove sub tars
  echo "> removing all sub tars"
  rm -rf repo-output-sub*.tar
fi

# gzip the tar
echo "> gzipping final tar"
gzip --force --verbose repo-output.tar

echo "> moving output file to $OUTPUT_FILE"
mv repo-output.tar.gz "$OUTPUT_FILE"
echo "> git-archive-all done"