Taming complex software with functional thinking
by Eric Normand
This book explores functional programming in a way that is very approachable for those starting out in the discipline, yet is still worthwhile for people who are already somewhat familiar with functional paradigms.
In Part 1, Normand explores the difference between data, calculations and actions. Calculations are more traditionally called pure functions: They depend only on their inputs, and produce the same results when run multiple times. Actions are more traditionally called impure functions: They produce side effects, and their results depend on when and where they are called. This key distinction is what the rest of the book is based on.
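The distinction is easy to see in code. Here is a minimal sketch in Ruby (the function names and data are my own, not the book's):

```ruby
# A calculation: the result depends only on the inputs, so calling it
# twice with the same arguments always gives the same answer.
def discounted_total(prices, rate)
  prices.sum * (1 - rate)
end

# An action: reading the clock and printing are side effects, so the
# observable result depends on when and where it runs.
def log_total(prices, rate)
  total = discounted_total(prices, rate)
  puts "[#{Time.now}] total: #{total}"
  total
end

discounted_total([100, 50], 0.1) # => 135.0, every time
```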
Part 2 explores functional abstractions for iteration and filtering, and how to work with nested data structures. It then goes on to derive several concurrency primitives and apply them. Concurrency analysis can be quite tricky, but the book makes it very approachable through the use of timeline diagrams.
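Ruby's `Enumerable` methods map directly onto those abstractions. A small sketch with made-up data (my example, not the book's):

```ruby
# Nested data: each order holds a list of line items.
orders = [
  { customer: "ada",   items: [{ name: "book", price: 20 }, { name: "pen", price: 2 }] },
  { customer: "grace", items: [{ name: "lamp", price: 35 }] }
]

# map/select/sum are Ruby's counterparts to map/filter/reduce.
totals = orders.map { |o| [o[:customer], o[:items].sum { |i| i[:price] }] }.to_h
# => { "ada" => 22, "grace" => 35 }

big_spenders = totals.select { |_, total| total > 30 }.keys
# => ["grace"]
```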
I thought it was very neat that the author cautions the reader about the pitfalls of over-enthusiasm for newly acquired skills, and provides guidance on how to avoid them. There are also plenty of ideas on how to continue the functional journey. The last chapter even lists out what the author thinks the biggest takeaways are:
- There are often calculations hidden in your actions
- Higher-order functions can reach new heights of abstraction
- You can control the temporal semantics of your code
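The first two takeaways can be sketched together in Ruby. The names and the validation rule below are hypothetical, purely for illustration:

```ruby
# Takeaway 1: extract the calculation hidden inside an action.
def valid_email?(email) # calculation: same input, same output
  email.include?("@")
end

def save_email(email, store) # action: mutates the store
  raise ArgumentError, "invalid email" unless valid_email?(email)
  store << email
end

# Takeaway 2: a higher-order function abstracts the
# "validate, then act" pattern over any validator.
def with_validation(validator)
  ->(value, store) do
    raise ArgumentError, "invalid value" unless validator.call(value)
    store << value
  end
end

save_valid_email = with_validation(method(:valid_email?))
store = []
save_valid_email.call("ada@example.com", store)
```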
Efficiency is the Enemy
There’s a good chance most of the problems in your life and work come down to insufficient slack. Here’s how slack works and why you need more of it.
The article proposes that being very efficient – constantly doing work – is not necessarily the same as being effective. Furthermore, being so busy can actually work against you, since you don’t have any spare capacity or slack to respond to new opportunities.
It reminded me of Technical Debt Is Like Tetris. In the Tetris metaphor, playing the game – while you have room to maneuver – is enjoyable, calm, and allows you to set up the board to reap the maximum points (removing 4 rows at once). When your board is almost full and you don’t have any space, playing feels stressful, panic sets in, and each piece is placed without much consideration. Even if you manage to last a long time, you won’t be making big plays.
In a similar vein, How to Think: The Skill You’ve Never Been Taught – published on the same website – talks about improving thinking skills by making time to think.
Long-form article about performance at work, and what it means as you age. Holding on to peak performance is impossible: Eventually you will lose it. You can continue finding meaning in life and work as you age, but that probably means doing different things than you did before. The article also talks about fluid intelligence and crystallized intelligence. The former tends to be greater in younger people, the latter in older people, as they accumulate knowledge and wisdom. Your career should shift accordingly.
This article explains how there are some things that are only possible when you dedicate effort over long periods of time, i.e. the grind. The author talks about curating a long list of bug tickets that seemed daunting, but was well worth the effort. In that same vein, I think that dedicating effort to learning new things works the same way. For example, you might not know any SQL today. If you dedicate 20 minutes daily to lessons and exercises, steadily increasing your understanding, in a year you will probably know more SQL than most developers.
Joe Armstrong, the creator of Erlang, writes why Object-Oriented programming sucks. The main objection is that data and functions should not be bound together:
Functions are imperative, data is declarative.
I found that resonated with me. I write Ruby every day, but tend to write objects that hold data only (e.g. by using dry-struct), and other objects that hold logic that operates on that data. I started using that style after spending some time writing Elixir, which, like Erlang, runs on the BEAM virtual machine.
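A minimal sketch of that separation. I use Ruby's built-in `Struct` here to keep the example dependency-free, where the real code would use dry-struct; the names and numbers are made up:

```ruby
# Data object: holds values only, no behavior.
LineItem = Struct.new(:name, :price, :quantity, keyword_init: true)

# Logic object: a stateless module that operates on the data.
module Pricing
  def self.total(items)
    items.sum { |i| i.price * i.quantity }
  end
end

items = [
  LineItem.new(name: "book", price: 20, quantity: 2),
  LineItem.new(name: "pen",  price: 2,  quantity: 5)
]
Pricing.total(items) # => 50
```

Keeping the data inert means any piece of logic can consume it, much like plain maps and structs flow between functions in Elixir.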
Clément Delafargue explains in a lot of detail how his company built a data lake that suits their needs, while avoiding the complexity that larger setups may require. It leverages Postgres’ great support for foreign data wrappers in clever ways. Since their data consumers are all familiar with SQL, presenting a stable schema as an API is a great insight.
Anton Zhiyanov shows off a lot of SQLite uses. I typically think of it as a database to use in client applications (e.g. software running on desktops or mobile devices). The examples in the article illustrate how it can be used for data analysis on a day-to-day basis. SQLite is a great database engine.
A typical web application runs several application processes, each fielding web requests behind some sort of load balancer. In Ruby on Rails, each of these processes is typically stateless: Any request can be handled by any of the server processes indistinctly. All state is kept in the database and in the client’s cookies. Deploying new code can bring unexpected challenges, even in seemingly simple cases.
Let’s explore one of those cases. For the rest of the post I will talk specifically about Ruby on Rails, the framework I know best, but I expect the concepts to carry over to other frameworks as well.