Bi-temporal data refers to a modeling technique for storing and retrieving data that changes on two different axes. The valid time axis refers to the range of time during which data is true in the real world. Transaction time refers to when the system recorded the data. Keeping track of both exposes a very rich data model.
In the simplest data model, a system keeps track of only the current state. Let’s assume that we are working with a system that keeps track of company personnel. In its people table, it will hold things like first and last names, date of birth, and social security numbers. At first sight, these seem like invariant facts, but upon closer inspection we can see that in reality they are not. People often change their name when their marital status changes, and birth dates and social security numbers are technically facts that don’t change, but often need to be corrected. These two very different reasons for change are often conflated, or worse, not accounted for. Bi-temporal data provides a way to deal with both.
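The two axes can be sketched in a few lines of code. This is a minimal illustration, not a production design; the record shape and names are mine. Each fact carries a valid-time range (when it was true in the real world) and a transaction timestamp (when the system learned it), so we can ask both "what was true on date X?" and "what did we believe on date Y?".

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PersonRecord:
    # Valid time: when the fact was true in the real world.
    name: str
    valid_from: date
    valid_to: date          # exclusive; date.max means "still valid"
    # Transaction time: when the system recorded the fact.
    recorded_on: date

history = [
    # Name on file since the system went live, entered on 2020-01-01.
    PersonRecord("Jane Smith", date(1990, 5, 1), date.max, date(2020, 1, 1)),
    # Name change after a change in marital status, valid from 2021-06-15.
    PersonRecord("Jane Doe", date(2021, 6, 15), date.max, date(2021, 6, 15)),
]

def name_as_of(valid: date, known: date) -> str:
    """What name was valid on `valid`, per what the system knew on `known`?"""
    candidates = [
        r for r in history
        if r.valid_from <= valid < r.valid_to and r.recorded_on <= known
    ]
    # The most recently recorded matching fact wins.
    return max(candidates, key=lambda r: r.recorded_on).name
```

Note how the same valid date can yield different answers depending on what the system knew at the time, which is exactly what distinguishes a correction from a real-world change.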
Like so much of the content on Martin Fowler’s website, this article – by Zhamak Dehghani – is a well-thought-out description of how to reason about breaking a monolith out into services. Some choice quotes:
Every increment must leave us in a better place in terms of the architecture goal.
In the context of leaving both the old way and the new way in place:
If the team stops here and pivots into building some other service or feature, they leave the overall architecture in a state of increased entropy. At this point the teams are actually further away from their overall goal of making changes faster. Any new developer to the monolith code needs to deal with two code paths, increased cognitive load of understanding the code, and slower process of changing and testing it.
It’s well worth the read.
Many of the blog posts written by large engineering organizations don’t apply to smaller ones. While they are still interesting, reading how Google and Amazon handle load doesn’t necessarily translate into practical advice. This post by Damir Svrtan and Sergii Makagon on the Netflix Engineering Blog is different. They describe how they rapidly built a new service meant to integrate with a variety of other services, even in the face of unknown requirements. Their solution: Hexagonal Architecture.
The idea of Hexagonal Architecture is to put inputs and outputs at the edges of our design. Business logic should not depend on whether we expose a REST or a GraphQL API, and it should not depend on where we get data from — a database, a microservice API exposed via gRPC or REST, or just a simple CSV file.
In particular, the way they defined their core concepts resonated with me. They stuck most of their code into Entities (domain objects), Repositories (reading and writing data), and Interactors (orchestration classes – i.e. service classes, use case objects).
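That three-way split can be sketched in a few lines. The names below are mine, not from the Netflix post: the Interactor depends only on a Repository interface (a port), so whether the data actually comes from a database, a gRPC service, or a CSV file is an adapter detail at the edge.

```python
from dataclasses import dataclass
from typing import Protocol

# Entity: a plain domain object, free of transport or storage concerns.
@dataclass
class Movie:
    id: str
    title: str

# Port: the Repository interface that business logic depends on.
class MovieRepository(Protocol):
    def find(self, movie_id: str) -> Movie: ...

# Interactor: orchestrates one use case against the port, not an adapter.
class RenameMovie:
    def __init__(self, repo: MovieRepository):
        self.repo = repo

    def execute(self, movie_id: str, new_title: str) -> Movie:
        movie = self.repo.find(movie_id)
        movie.title = new_title
        return movie

# Adapter: one possible implementation of the port, here an in-memory store.
# A SQL gateway or a gRPC client would implement the same interface.
class InMemoryMovieRepository:
    def __init__(self, movies: dict[str, Movie]):
        self.movies = movies

    def find(self, movie_id: str) -> Movie:
        return self.movies[movie_id]
```

Swapping the adapter – say, from the in-memory store to a real database – requires no change to `RenameMovie`, which is the point of putting inputs and outputs at the edges.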
I’ve been doing a lot of research into multi-service architectures, and I’ve seen many references to how entity services are an anti-pattern. Michael Nygard has a previous article describing just that. Designing services to avoid the anti-pattern is sometimes easier said than done. This post walks the reader through how to avoid the pitfalls, with a concrete example that models services around the business lifecycle instead of just focusing on the data they store.
Yesterday, some family members were expressing feelings of helplessness and inevitability in response to the COVID-19 crisis. This is what I wrote:
By definition, a pandemic is a global crisis. Taking measures to protect the lives of the most vulnerable is not merely postponing the inevitable. Yes, a lot of people are going to die. The difference between action and inaction could be millions of people. The economy is important inasmuch as it is the mechanism for distributing goods and promoting wellness. We must think about what we can do to promote wellness for the majority of people. We are all in the same boat. At the moment, our role is social distancing. It can’t last forever. The dichotomy that presents millions of deaths or the economy as the only two options is false. If efforts are directed – as has been done previously in times of war – we can have both. With massive and repeated testing programs for detecting infection, early isolation, etc., we can start returning to normal life eventually.
Humanity has an enormous capacity to produce cars, televisions, phones, and apps. That capacity can also be used to produce respirators, virus test kits, personal protection equipment, toilet paper, and vaccines for the whole population.
The current world leadership worries me a lot, but I don’t accept the feeling of inevitability. This is a difficult test for humanity, but we can face it head-on.
The original version in Spanish:
Por definición una pandemia es una crisis mundial. Tomar medidas para proteger la vida de los más vulnerables, no es postergar lo inevitable. Si, se va morir mucha gente, pero la diferencia son millones de personas. La economía es importante, en función que es el mecanismo para distribuir bienes y promover el bienestar. Debemos de pensar en lo que podemos hacer para promover el bienestar de la mayoría de la personas. Todos estamos en el mismo barco. Por el momento lo que nos toca hacer, es el distanciamiento social. No se puede mantener para siempre. La dicotomía que divide las opciones entre millones de muertes y la economía es falsa. Si se orientan los esfuerzos – como se ha hecho anteriormente en tiempos de guerra – se pueden tener las dos. Con programas de pruebas de virus masivas y repetidas para detectar infecciones, aislamiento temprano, etc se puede empezar a regresar a la normalidad eventualmente.
La humanidad tiene una capacidad inmensa de producir coches, televisiones, teléfonos y apps. Esa capacidad también puede producir respiradores, pruebas de virus, material de protección personal, papel de baño, y vacunas para toda la población.
El liderazgo actual mundial me tiene muy preocupado, pero no acepto el sentimiento de inevitabilidad. Esta es una prueba muy difícil para la humanidad, pero se puede encarar.
Jacob Gabrielson writes about the challenges of distributed systems at Amazon. He enumerates failure modes inherent in all distributed systems and calls them the eight failure modes of the apocalypse. Engineering distributed systems is hard; being cognizant of all the failure modes helps by providing some structure for tackling the problem.
Rohit Kumar points out that Rails 6.1 will add strict loading support. With it turned on, Rails will raise an error instead of lazily loading associations. I welcome this change. Lazy loading seems like a feature that speeds up development in Rails, but it is the cause of N+1 queries. I have yet to work on a Rails app that doesn’t have performance issues because of this.
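The N+1 pattern that strict loading guards against is easy to reproduce outside Rails. Here is a sketch in plain SQL terms, run through SQLite with an invented authors/posts schema: lazily fetching each author’s posts issues one query per author, while eager loading issues a single join no matter how many authors there are.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'On Engines'), (2, 2, 'On Compilers');
""")

def titles_lazy():
    # N+1: one query for the authors, then one more query *per author*.
    queries = 1
    result = []
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        queries += 1
        for (title,) in conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,)
        ):
            result.append((name, title))
    return result, queries

def titles_eager():
    # Eager loading: a single join does the same work in one query.
    rows = conn.execute("""
        SELECT a.name, p.title
        FROM authors a JOIN posts p ON p.author_id = a.id
    """).fetchall()
    return rows, 1
```

With two authors the lazy path already costs three queries; with a thousand, a thousand and one. An error at development time, as strict loading provides, is a much cheaper way to catch this than a slow page in production.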
Egor Rogov gives an overview of recursive query syntax in SQL, and walks through a step-by-step example of how to write a useful, performant recursive query that solves a realistic business-logic problem.
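For a taste of the syntax – this example is mine, not from the post – here is a recursive CTE walking an employee–manager hierarchy, run through Python’s built-in SQLite. The anchor member selects the root; the recursive member joins each pass’s results back against the table, accumulating depth as it goes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES
        (1, 'Alice', NULL),   -- the root of the hierarchy
        (2, 'Bob',   1),
        (3, 'Carol', 2);
""")

rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        -- Anchor member: start from employees with no manager.
        SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
        UNION ALL
        -- Recursive member: attach each employee to the chain above them.
        SELECT e.id, e.name, c.depth + 1
        FROM employees e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()
```

The same `WITH RECURSIVE` shape works in PostgreSQL, which is the dialect the post itself uses.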
Alex Hudson writes a post tackling the idea that soon we will be able to produce software with significant functionality without coding. I agree with the author’s conclusion: the goal is probably a pipe dream that has been oversold. I’ll add that I’ve been around the block a few times, and I’ve seen that software generated without change control quickly becomes unmaintainable. Back in the day, MS Access allowed power users to deal with data in a much better fashion than Excel files. However, evolving those applications was very painful.
In an inspiring post, Will Larson discusses career growth in software engineering. He divides the areas of focus into Pace, People, Prestige, Profit, and Learning. Growth in different areas comes at different times. As in finance, investing early brings compounding gains.