Saturday, June 27, 2020

Checked exceptions break composition

A.K.A. Always Throw Runtime Exceptions or their subclasses

A typical Java or C++ function can come with an exception specification: a method can declare that it throws exceptions of a particular type (e.g. IOException, std::bad_alloc), and clients must either handle that exception with a try-catch block or declare it in their own signatures. This seems good at the outset, until we spend some time thinking through what it does to the type of the function.

A typical function in a happy-go-lucky world either succeeds or fails because of something beyond its control. If it succeeds, it returns a value of the declared return type (let's call it SuccessValueType). If it fails (e.g. a file read error or a memory allocation error), it throws an exception and the error-handling parts of the code run. In type terms, the return type of the function is Either<SuccessValueType, RuntimeExceptionType>, where RuntimeExceptionType is an implicit secondary return type of the function. If all functions agree that RuntimeExceptionType is the implicit secondary return type, functions and try-catch blocks compose beautifully: every function call site becomes an implicit early-return point that still yields a valid return value. As a corollary, every try-catch block wrapping the function makes little to no assumptions about the kinds of exceptions it's likely to receive, and that builds flexibility into the code as it evolves.

Here's an example:
Function 1 => calls => Function 2 followed by Function 3; both Function 2 and Function 3 can only throw RuntimeExceptions.
If either of these functions throws, the RuntimeException propagates as an "early return" from Function 1 without any changes. You can stack as many layers of nesting as you like; the return types and early-return behavior stay compatible, because all the functions agree that RuntimeExceptionType is an implicit return type.
The application then adds error-handling code close to the top level of the processing hierarchy and presents the error to the user (as a form of recovery), retries, or notifies an engineer to take a look. If we need to add context to the exception, a try-catch block can be introduced at any level to attach context information and rethrow the RuntimeException. Introducing an intermediate try-catch is a purely local change that composes well with try-catch blocks further up the stack (removing a try-catch composes equally well). Adding new libraries or call paths remains a purely local operation and does not affect the type hierarchy or the error-handling try-catch structure.
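
Here's a minimal Java sketch of this pattern (the function names and error strings are invented for illustration):

    // All functions agree that RuntimeException is the implicit secondary
    // return type, so every call site is an implicit early-return point.
    public class Pipeline {
        String function1() {
            // A RuntimeException from function2 or function3 propagates
            // through here unchanged -- an implicit early return.
            return function2() + function3();
        }

        String function2() {
            try {
                return readRecord();
            } catch (RuntimeException e) {
                // A purely local change: attach context and rethrow.
                // Callers and their try-catch blocks are unaffected.
                throw new RuntimeException("while reading record in function2", e);
            }
        }

        String function3() { return "world"; }

        String readRecord() {
            throw new IllegalStateException("disk read failed"); // a RuntimeException
        }
    }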

Contrast this with what happens when a checked exception is introduced. The function type changes from bi-valent to tri-valent: Either<SuccessValueType, RuntimeExceptionType, CheckedExceptionType>. Note that avoiding the RuntimeExceptionType is not possible (otherwise you'd have code littered with redundant and meaningless bad_alloc and IOException handlers). With a tri-valent return type, we have 2 options:

1. Convert the function back into the bi-valent return type by introducing a try-catch block, catching the checked exception and rethrowing as a RuntimeException.
2. Propagate the checked exception and ask our clients to update their code.

(1) is of course the reasonable thing to do. It's a local operation, client code doesn't have to change, and we're back to dealing with only a single type of failure (either the function succeeds or it fails with a RuntimeException).
(2) is a world of pain. In that world, every new introduction of a checked exception means that significant chunks of the program have to change to mention the new checked exception type.
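
Here's what (1) looks like in Java. UncheckedIOException ships with the JDK for exactly this purpose; the function itself is a made-up example:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class Boundary {
        // Bi-valent signature: succeeds with a String or fails with a
        // RuntimeException. No "throws" clause leaks to callers.
        static String readConfig(String path) {
            try {
                return new String(Files.readAllBytes(Paths.get(path)));
            } catch (IOException e) {
                // Convert the checked exception back to the implicit
                // RuntimeException channel at the lowest possible level.
                throw new UncheckedIOException("failed to read " + path, e);
            }
        }
    }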

Going back to our original example:
If Function 1 => calls => Function 2 followed by Function 3, and both of them throw checked exceptions of different types, the return type of Function 1 becomes Either<SuccessValueType, RuntimeExceptionType, F2CheckedExceptionType, F3CheckedExceptionType>: essentially, the union of all the checked exceptions shows up in the return type signature. As we keep adding nested functions, this type list keeps expanding.

In practical terms, this means the developer adds "throws F2CheckedExceptionType, F3CheckedExceptionType, ..." to each of the caller functions in order to get them to compose, and all the try-catch blocks similarly bloat to handle all the possible failure cases. Beyond small codebases this is completely infeasible, because the signature changes and try-catch handlers keep propagating throughout the codebase. This hurts dev velocity.

From a recovery perspective, these checked exceptions are typically handled just one level up the call stack, at the lowest level of library code (to avoid the exception-signature blowout), and a local resolution is applied (retry a few times, then fail). This is rarely optimal. For an out-of-disk-space error, a batch-processing application might prefer an immediate crash while a streaming application might prefer a continuous retry; without propagating the error all the way up to the application, this choice of recovery can't be made reasonably, and the only way out is to pass configuration down the stack to control the behavior... a gargantuan mess.

In the RuntimeException-only world, the retry configuration stays at the top level, where recovery can be chosen based on the execution environment.
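
A sketch of what that top-level handler can look like, with a hypothetical policy knob living at the top of the stack:

    public class TopLevel {
        enum Policy { CRASH_FAST, RETRY_FOREVER }

        static void run(Runnable job, Policy policy) {
            while (true) {
                try {
                    job.run();
                    return;
                } catch (RuntimeException e) {
                    // The recovery decision lives here, at the top of the
                    // processing hierarchy, not buried in library code.
                    if (policy == Policy.CRASH_FAST) {
                        throw e; // batch job: fail fast, let the scheduler retry
                    }
                    // streaming job: log and keep going
                    System.err.println("job failed, retrying: " + e);
                }
            }
        }
    }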

In summary, as a practical matter, professional software engineers should ensure that their functions only throw unchecked exceptions (RuntimeExceptions or their subclasses). Checked exceptions are actively harmful to dev velocity in large codebases and should be avoided. Google avoids this tar pit by banning exceptions from C++ code (for historical reasons), LinkedIn and Pinterest actively run RuntimeException-based Java codebases, and you should encourage this too.

Wednesday, January 15, 2020

IO numbers that everyone should know

In the Numbers Every Programmer Should Know, one set of numbers I've always found missing is IO numbers (HDD vs SSD random reads/writes). I found a really good source on StackExchange for these numbers and, for the sake of posterity, I'm documenting them here (for me and for you):

  • SSD | HDD Sequential Read/Write : 700 MB/s+ | 115 MB/s (6x diff) 
  • SSD | HDD Random Read 512KB : 160 MB/s | 39 MB/s (4x diff)
  • SSD | HDD Random Write 512KB : 830 MB/s | 57 MB/s (14x diff)
  • SSD | HDD Random Read 4KB : 27 MB/s | 0.5 - 1.5 MB/s (17x diff)
  • SSD | HDD Random Write 4KB : 135 - 177 MB/s | 0.7 MB/s (192x+ diff!)

The bottom line is that unless you're thrashing the HDD with lots of 4KB random writes, the HDD should not be tapped out until about 30+ MB/s (and an SSD should be just fine until about 150 - 300 MB/s). If you're seeing an HDD tapped out at 3 MB/s, you're either not writing sequentially or your write block size is too small. If you're seeing an SSD tapped out at < 100 MB/s, it's almost certainly a software bug and not an IO limitation. In either case, the basic norm holds: if you can, always use SSDs; they usually save you money in CPU time.
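
To make the numbers concrete, here's a back-of-envelope calculation (in Java, using the figures from the table above) of how long moving 10 GB takes under different access patterns:

    public class IoEnvelope {
        // Rough throughputs from the table above, in MB/s.
        static final double HDD_SEQ = 115, HDD_RAND_4K = 0.7;
        static final double SSD_RAND_4K = 135;

        static double seconds(double megabytes, double mbPerSec) {
            return megabytes / mbPerSec;
        }

        public static void main(String[] args) {
            // Writing 10 GB sequentially vs as 4KB random writes:
            System.out.printf("HDD sequential: %.0f s%n", seconds(10_240, HDD_SEQ));     // ~89 s
            System.out.printf("HDD 4KB random: %.0f s%n", seconds(10_240, HDD_RAND_4K)); // ~4 hours!
            System.out.printf("SSD 4KB random: %.0f s%n", seconds(10_240, SSD_RAND_4K)); // ~76 s
        }
    }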


Thursday, June 20, 2019

Modern Programming: never use Inheritance; use Composition instead

Inheritance vs Composition is an age-old debate. The world has evolved enough that it's time to put this discussion to rest: there is no good reason to use implementation inheritance in new code. Composition is functionally equivalent to inheritance, and it produces superior outcomes, flatter class hierarchies and more flexible code. Composition also does not violate encapsulation, and it avoids whole classes of bugs produced by unexpected polymorphic dispatch to implementations. We get better class design for free as well.

In short, never use inheritance - always express the same code re-use through composition and be happy. Let's go through each of the points one by one:

1. Composition is functionally identical to inheritance.
This one is easy to see. When inheriting an implementation, every subclass method has an implicit parameter: the superclass instance it shares state with. Composition just makes this parameter explicit as a constructor argument and takes away the superclass. Code reuse that used to happen by calling a superclass method happens by calling the same method on the composed object, as in the sketch below.
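
A minimal sketch (the counting-list classes are invented for illustration):

    // Inheritance: implicit access to the reused implementation.
    class CountingListInherit extends java.util.ArrayList<String> {
        int adds = 0;
        @Override public boolean add(String s) {
            adds++;
            return super.add(s); // the implicit "parameter": the superclass
        }
    }

    // Composition: the reused implementation is an explicit dependency.
    class CountingListCompose {
        private final java.util.List<String> delegate;
        int adds = 0;

        CountingListCompose(java.util.List<String> delegate) {
            this.delegate = delegate; // explicit constructor argument
        }

        boolean add(String s) {
            adds++;
            return delegate.add(s); // same reuse, explicit call
        }
    }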

2. Composition produces superior outcomes
When operating in an inheritance class hierarchy, a code dependency and a type dependency become coupled, and very often this is unnecessary. For example, a 2DSurface class may have a computeArea method that you want to re-use even though your class is not a 2DSurface. With inheritance, the 2DSurface type and the code reuse are coupled; with composition, the two are separate. This is especially advantageous when you want to introduce code between the levels of the hierarchy, e.g. a "TimingClass" that wraps calls to the underlying class, as sketched below.
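
Here's a sketch of that TimingClass as a composed wrapper (Surface2D stands in for the prose's 2DSurface, since Java identifiers can't start with a digit):

    interface Surface2D {
        double computeArea();
    }

    // Wraps any Surface2D and times calls to it. It adds behavior without
    // adding a level to any inheritance hierarchy.
    class TimingClass implements Surface2D {
        private final Surface2D inner;

        TimingClass(Surface2D inner) { this.inner = inner; }

        @Override public double computeArea() {
            long start = System.nanoTime();
            try {
                return inner.computeArea();
            } finally {
                System.out.println("computeArea took " + (System.nanoTime() - start) + " ns");
            }
        }
    }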

3. Composition produces flatter class hierarchies
When operating within an inheritance hierarchy, the more variants or ways the code gets reused, the deeper the levels of the hierarchy grow, with "leaf"-level code overriding methods defined several layers above it. Code with more than 2 layers of hierarchy is very difficult to manage, since you have to trace up and down layers of code that call each other. Composition forces a single, clean extension point in the code, and adding classes like the "TimingClass" above does not increase the depth of the inheritance hierarchy.

4. Composition does not violate encapsulation a.k.a. friendly refactoring
Changing a base-class method's access, or the mutation of a base-class member, is virtually impossible in a non-trivial class hierarchy, because subclasses may rely on protected methods or on direct access to state, which violates encapsulation. The solution is to depend solely on the public API methods of a class (so that internal state can be refactored without affecting behavior). Composition forces this in a direct manner, enforced by the compiler.

5. Composition prevents polymorphic dispatch bugs
A common (bad) design pattern is to have an abstract base class call an abstract method that must be implemented by subclasses. This produces bugs when code is called across different levels of the hierarchy, as in the sketch below.
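
The classic illustration is the self-use bug from Effective Java's InstrumentedHashSet, reproduced here as a sketch:

    import java.util.Collection;
    import java.util.HashSet;
    import java.util.List;

    // Counts insertions -- or tries to. HashSet inherits an addAll() that
    // calls add() internally, so the count below ends up doubled.
    class InstrumentedHashSet<E> extends HashSet<E> {
        int addCount = 0;

        @Override public boolean add(E e) {
            addCount++;
            return super.add(e);
        }

        @Override public boolean addAll(Collection<? extends E> c) {
            addCount += c.size();
            return super.addAll(c); // internally dispatches back to our add()!
        }
    }

    class Demo {
        public static void main(String[] args) {
            InstrumentedHashSet<String> s = new InstrumentedHashSet<>();
            s.addAll(List.of("a", "b", "c"));
            System.out.println(s.addCount); // prints 6, not 3
        }
    }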

6. Polymorphic dispatch can be implemented better through generics and type bounds
The common practice of dispatching through a single "super-class" hierarchy is antiquated; generics with type bounds provide a much better substitute. E.g. Shape.getArea() doesn't need to be implemented by writing all code against an abstract Shape class that provides .getLength() and .getWidth() methods to override. A much better way is to implement finer-grained interfaces (traits) of the form ShapeWithArea (.getArea()) and RectilinearShape (.getLength(), .getWidth()). Type bounds can then express just the needed dependencies: <T extends ShapeWithArea> if the code needs only a ShapeWithArea, and <T extends ShapeWithArea & RectilinearShape> if it needs both.
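
Sketched in Java (the Rectangle class and method names are invented to make it concrete):

    interface ShapeWithArea {
        double getArea();
    }

    interface RectilinearShape {
        double getLength();
        double getWidth();
    }

    class Rectangle implements ShapeWithArea, RectilinearShape {
        private final double length, width;
        Rectangle(double length, double width) { this.length = length; this.width = width; }
        @Override public double getArea() { return length * width; }
        @Override public double getLength() { return length; }
        @Override public double getWidth() { return width; }
    }

    class Geometry {
        // Needs only areas: the bound asks for exactly that and no more.
        static <T extends ShapeWithArea> double totalArea(Iterable<T> shapes) {
            double total = 0;
            for (T s : shapes) total += s.getArea();
            return total;
        }

        // Needs both traits: an intersection type bound expresses the
        // combined requirement without any common superclass.
        static <T extends ShapeWithArea & RectilinearShape> double areaPerUnitLength(T s) {
            return s.getArea() / s.getLength();
        }
    }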

7. Dependency injection / Testability is much easier with composition
In composition, the reused class is just another dependency and can be mocked and faked. This is much harder to do when trying to mock out the superclass itself in the type hierarchy if inheritance is used; even if we stubbed out the superclass methods, we wouldn't know when new ones got added.
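
E.g., reusing the Surface2D sketch from point 2, a test can hand in a fake implementation directly (FakeSurface is hypothetical):

    class FakeSurface implements Surface2D {
        @Override public double computeArea() { return 42.0; } // canned answer
    }

    class TimingClassTest {
        public static void main(String[] args) {
            // The dependency is injected through the constructor, so no
            // mocking framework or hierarchy surgery is needed.
            Surface2D timed = new TimingClass(new FakeSurface());
            assert timed.computeArea() == 42.0; // run with -ea to enable asserts
        }
    }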

Overall, the advancements in our understanding of type theory and category theory point to composition being the "right" abstraction for composing code. Inheritance is a special case of composition (a trivial subset) with worse properties around unit testing, dependency injection and code maintenance. Given the above, inheritance from concrete or abstract classes (especially those with any form of state) should be strongly avoided. Instead, implementing interfaces plus composition should be the strongly preferred approach.

Tuesday, May 21, 2019

Managing Humans with Math

I was in a funky corner of the internet, reading about reinforcement learning when I chanced upon this article that compares a number of reinforcement learning algorithms. Since we as humans are glorified neural networks amenable to reinforcement learning and companies are nothing but hierarchical relationships between humans, it was very interesting to go through that article with a business point of view.

Long story short, the article works out the conditions under which the algorithms can break down a task recursively, their optimality and convergence conditions, and the "knowledge" required of the lowest-level agents to learn successfully. From my reading of the article, I'm most drawn to the "Options" style of management: work with high quality people, give them maximum freedom to act and step in only at decision points.

This is the style of management I'd grown used to at Google (and that I've seen work really well there), and it is quite a contrast to other management styles I have experienced. Not to say that the other styles were ineffective; they just took different routes to the same goals.

In summary: prefer the "Options" style of management, work with smart people, give them all the freedom they want and deserve, provide a strong learning function and clear feedback on how they're performing. Step in only at "Choice" points and then too defer to people with context of the problem. Most decisions are reversible, so just take one and let's keep going!

Wednesday, February 13, 2019

Hermetic MySQL in the modern world

So you're looking to run a MySQL docker instance without any environmental dependencies? Here's how to do that:

$ docker run --name mysql-test \
    -e MYSQL_ALLOW_EMPTY_PASSWORD=true \
    -e MYSQL_DATABASE=testingdb \
    -e MYSQL_USER=scott \
    -e MYSQL_PASSWORD=tiger \
    -p 3306:3306 \
    mysql/mysql-server:latest

Documentation on the various environment variables is located here:

To connect to your MySQL instance without depending on my.cnf:

$ mysql --no-defaults --host 127.0.0.1 --port 3306 --user scott --password --protocol tcp testingdb

That's it. You have a running instance of MySQL from a Docker image, and you're connecting to it with a generic mysql command-line client. Have fun.

Saturday, February 02, 2019

Alerts should be actionable a.k.a.: Do not email on success!

Following the Unix philosophy: Do one thing, do it well and be quiet about it.

In software engineering, if you're writing a system that's useful and suddenly, one day, you think it's nice to notify users via email that their useful thing is being done, you're making a mistake. 

Emails from software systems should be actionable: if a system is sending an email to a user, the email should be helpful, providing enough context about what the system is, where it's running, who owns/runs it, and what the problem is that requires human attention. Ideally, the alert email should clearly specify the next steps and the dashboards that can be used to confirm that the problem is fixed.

The worst offense a system can commit is to send out success emails. These fail on 2 counts:
1. Success emails are not actionable - if I read a success email, I am informed, and I promptly create a Gmail filter to never see another success email from the system again. The system made me do active work to ignore it.
2. Success emails are not trackable - if I want to see the ratio of successes to failures of the system, Gmail is a terrible way to do it. From first-hand experience, measuring alert volume over time in Gmail is a time sink. Please build a dashboard and make the world a better place. Your future self will thank you.

The best alert emails are those that just tell me the commands to fix the problem. Oh happy me, I don't have to read a runbook, talk to people, poke around dashboards to see what the problem is. Run a few commands and presto, it's fixed.

Take some time and make your life better: don't send success emails, and do send actionable emails on failure.

Cheers!
Divye

Thursday, January 31, 2019

How to debug a crashing docker container

If you want to run your docker process with some tweaks because it's crashing in your docker container and causing the container itself to stop (without giving you a way to inspect the files on the image), here's the magic command to start it with just bash.

(I found this after quite a bit of hunting on the internet. The magic flag is --entrypoint, which overrides the image's default entrypoint; don't forget the -s at the end, which tells bash to read commands from standard input.)

Here's a sample command:
docker run -it --entrypoint /bin/bash  $IMAGE -s
