Ylan Segal

Book Review: Architecting the Cloud

Architecting the Cloud: Design Decisions for Cloud Computing Service Models, by Michael J. Kavis, describes cloud computing in general and the service models that are prevalent today in particular. It explores the differences and trade-offs between Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). I consider the book a good introduction to the considerations of cloud computing for those who are used to more traditional data-center deployments.

The author includes a section on worst practices: things that do not translate well when moving to the cloud, with recommendations on how to avoid them. I found the most useful chapter to be the one on disaster recovery: a good overview of different strategies for becoming fault-tolerant in the cloud and embracing resiliency.

The REPL: Issue 9 - April 2015

Does Organization Matter?

Uncle Bob makes a useful analogy between code organization and the physical organization of, say, your desk or a library. Organization matters. Sometimes all we need is a small amount of organization; sometimes we need the Dewey Decimal System.

Why (and How) I Wrote My Academic Book in Plain Text

Most developers appreciate the benefits of plain text files, since they play so well with other tools: source control, grep, find, etc. W. Caleb McDaniel makes a great case for using plain text for more than programming code. In his case, he composes his academic writing in plain text and uses open source tools at the end to convert it to industry-standard proprietary formats. Awesome.

The Quality Wheel

A big part of effective communication is sharing the same terminology. It helps with context and allows us to be more specific. Jessitron proposes expanding our vocabulary around what “Quality Software” means. Instead of saying a piece of code is “good” or “clean”, how about “configurable” and “readable”?

Adding an Index to Mongo Can Change Query Results

While trying to optimize some slow queries in a MongoDB database, I found an unexpected and concerning surprise: Adding an index can alter the results returned by a query against the same dataset.

Demonstration

Suppose we have a collection that looks like this (all samples are from a mongo shell):

> db.example.find()
{
  "_id" : ObjectId("5542ef97b08a749f8e8e4f0d"),
  "title" : "Pink Floyd",
  "rating" : 1
}
{
  "_id" : ObjectId("5542efa2b08a749f8e8e4f0e"),
  "title" : "Led Zeppelin",
  "rating" : 2
}
{
  "_id" : ObjectId("5542efb3b08a749f8e8e4f0f"),
  "title" : "Aerosmith",
  "rating" : null
}
{
  "_id" : ObjectId("5542efbab08a749f8e8e4f10"),
  "title" : "Metallica"
}

Note that some documents have a numeric rating, one has a null value and one does not have the field.

Suppose we query for all documents with a rating of 1 or null:

> db.example.find({rating: { $in: [1, null]}})
{
  "_id" : ObjectId("5542ef97b08a749f8e8e4f0d"),
  "title" : "Pink Floyd",
  "rating" : 1
}
{
  "_id" : ObjectId("5542efb3b08a749f8e8e4f0f"),
  "title" : "Aerosmith",
  "rating" : null
}
{
  "_id" : ObjectId("5542efbab08a749f8e8e4f10"),
  "title" : "Metallica"
}

The Metallica document is returned, even though it does not have a rating field. That is MongoDB's documented behavior: an equality match on null also matches documents that are missing the field entirely.
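
To isolate that behavior, we can query on null by itself; a sketch of what that returns against this collection, given the null-matching semantics above:

> db.example.find({rating: null})
{
  "_id" : ObjectId("5542efb3b08a749f8e8e4f0f"),
  "title" : "Aerosmith",
  "rating" : null
}
{
  "_id" : ObjectId("5542efbab08a749f8e8e4f10"),
  "title" : "Metallica"
}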

Suppose that, to optimize this collection, we add an index on the rating field and re-run our query:

> db.example.ensureIndex({rating: 1}, {sparse: true})
{
  "createdCollectionAutomatically" : false,
  "numIndexesBefore" : 1,
  "numIndexesAfter" : 2,
  "ok" : 1
}
> db.example.find({rating: { $in: [1, null]}})
{
  "_id" : ObjectId("5542efb3b08a749f8e8e4f0f"),
  "title" : "Aerosmith",
  "rating" : null
}
{
  "_id" : ObjectId("5542ef97b08a749f8e8e4f0d"),
  "title" : "Pink Floyd",
  "rating" : 1
}

The Metallica document is gone. Surprised? I definitely was.

Thoughts

The behavior may seem a bit contrived, but I actually encountered it while trying to optimize a production database. This example just boils it down to something trivial to reproduce. I should mention that if the index is created without the sparse option, the results are correct. The sparse option saves space on the index itself by only creating an entry for documents that have the field. A non-sparse index creates an entry for every document and sets the value to null.
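
For comparison, here is a sketch of verifying that on the same collection. The commands are shown without their output; with a regular (non-sparse) index the query once again returns Pink Floyd, Aerosmith and Metallica:

> db.example.dropIndex({rating: 1})
> db.example.ensureIndex({rating: 1})
> db.example.find({rating: { $in: [1, null]}})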

In my opinion, the above-described behavior is awful. It is up to the database engine to decide which index to use, and a sparse index may be applicable to fewer queries than a non-sparse one. However, my expectation of indexes is that they are all about performance: trading off disk space and insert time for query time. The existence of an index should never change the result set for the same query and dataset.

Recursion and Pattern Matching in Elixir

In order to teach myself Elixir, I have been working my way through Exercism.io, a set of practice coding exercises with mentorship from the community. All exercises come with the tests already written; it is up to you to write a passing implementation.

Being new to Elixir and functional programming, I find the exercises a great way to learn about syntax, idiomatic code and functional programming patterns. One of the exercises consists of re-implementing common list operations, like count, map and reduce.

Implementing Count With Recursion

The test that the implementation must pass looks like this:

defmodule ListOpsTest do
  alias ListOps, as: L

  use ExUnit.Case, async: true

  test "count of empty list" do
    assert L.count([]) == 0
  end

  test "count of normal list" do
    assert L.count([1,3,5,7]) == 4
  end

  test "count of huge list" do
    assert L.count(Enum.to_list(1..1_000_000)) == 1_000_000
  end
end

My first implementation looked like this:

defmodule ListOps do
  def count(list) do
    count(0, list)
  end

  def count(acc, []) do
    acc
  end

  def count(acc, [_|tail]) do
    count(acc + 1, tail)
  end
end

First thing of note: count/2 [1] is defined twice. This is functionality the language provides. In Java, method overloading requires a different parameter list, which is how the correct method gets picked. In Ruby, two definitions of the same method in the same scope can't coexist: the second simply replaces the first. In Elixir, the correct function clause is chosen at run-time, depending on which pattern is matched.

On our first test, when L.count([]) is called, the count/1 function matches, because the call has a single argument. That function calls count(0, []). This matches the first count/2 definition, because it is passed an empty list (any acc will match). That in turn returns acc, which is 0, making the test pass.

For the second test, count/1 is matched, which ends up calling count(0, [1,3,5,7]). That call matches the second count/2 definition, because the list is not empty [2]. That function then calls itself recursively, adding 1 to the accumulator on each call, until the list is empty and the accumulator is returned.

The calls will look like:

count([1,3,5,7])
count(0, [1,3,5,7])
count(1, [3,5,7])
count(2, [5,7])
count(3, [7])
count(4, []) # Returns 4

Note that recursion and pattern matching have taken the place of the conditionals and explicit loops you would reach for in non-functional programming languages.
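
For contrast, here is a hypothetical version of count/2 written as a single clause with an explicit conditional (my own illustration, not part of the exercise); the branching that pattern matching handled for us becomes explicit:

defmodule ListOpsConditional do
  def count(acc, list) do
    # Explicit conditional instead of two pattern-matched clauses.
    if list == [] do
      acc
    else
      # Destructure the non-empty list into its head and tail.
      [_ | tail] = list
      count(acc + 1, tail)
    end
  end
end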

Implementing Count With Reduce

The same exercise asks for an implementation of reduce: a function that runs a generic function on each element of a list, threading an accumulator through the calls. My implementation looks like this:

defmodule ListOps do
  def reduce([], acc, _fun) do
    acc
  end

  def reduce([head|tail], acc, fun) do
    reduce(tail, fun.(head, acc), fun)
  end
end

The same trick as before is used here: matching on an empty list returns the accumulator. When the list has at least one member, the function is called with that member and the current accumulator, and reduce/3 is called recursively on the tail of the list.
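
As a quick illustration of how the accumulator is threaded through (my own example, not part of the test suite), summing a list expands roughly like this:

ListOps.reduce([1, 2, 3], 0, fn(x, acc) -> x + acc end)
# reduce([2, 3], 1, fun)
# reduce([3], 3, fun)
# reduce([], 6, fun) # Returns 6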

With reduce/3 in place, the count/1 implementation becomes much simpler:

defmodule ListOps do
  def count(list) do
    reduce(list, 0, fn(_, acc) -> acc + 1 end)
  end
end
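
The same head-and-tail pattern extends to the other operations in the exercise. For example, a sketch of map/2 in this style (my own take, not the exercise's reference solution, and not tail-recursive):

defmodule ListOps do
  # Apply the function to the head and recurse on the tail,
  # building the new list as the calls return.
  def map([], _fun), do: []
  def map([head | tail], fun), do: [fun.(head) | map(tail, fun)]
end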

Conclusion

The exercise has some other operations as well: map, reverse, filter, append and concat. I learned a lot working on the solutions and started to get a feel for functional programming. If you are learning a new language, I would recommend trying Exercism.io. It currently supports 23 languages!


  1. In Elixir, when referring to functions, it is customary to append a slash and the arity to the name. foo/2 refers to the function foo defined with 2 parameters.

  2. Elixir includes matching a list against its head and tail with the [head|tail] syntax. The _ signals that the parameter will not be used.

The REPL: Issue 8 - March 2015

Turning The Database Inside Out With Apache Samza

Based on a talk at Strange Loop 2014, this post was eye-opening. Although it is supposed to be about Apache Samza, most of it is devoted to databases in general and what they are good at: keeping global state, replication, secondary indexing, caching, and materialized views. This high-level view gave me a new perspective on how to think about databases. The many illustrations in the article are beautiful. Please go and read.

Your Most Important Skill: Empathy

The legendary Chad Fowler makes the case that empathy is a skill that everyone will benefit from developing further. He provides a great list of reasons why that is. Most importantly, he also details how to practice.

Git From The Inside Out

Git has often been criticized for having an inconsistent interface and leaking unneeded abstractions to the user. Some of that criticism is warranted. Nonetheless, git is one of my favorite programs. I use it hundreds of times throughout the day, always on the command-line, complemented by tig, the ncurses client for git. This article talks about the internals of git: how it stores commits, trees, objects, tags, branches, etc. on disk. It is well written, well organized and a pleasure to read. Reading this guide will make it easier for you to interact with git, because you will understand its internals. However, I think you should read it because it shows how great functionality can be achieved with minimal dependencies, using only the local filesystem as a data store.