Compositional Elixir—Contexts, Error Handling, and Middleware

This post serves as a follow-up to a post I recently wrote about Railway Oriented Programming and abusing Elixir's with statement to great effect. In that post, I briefly talked about one major advantage of the with pattern: deferring error handling to the site where the errors actually need to be consumed, rather than sprinkling error handling code all over the place.

The main idea is that all of the business logic you write should be made of small, composable units which you can string together in such a way that you shouldn't have to handle errors in each individual unit. The way I build programs is that each small unit essentially has a hard dependency on the other units that it itself calls: if any of these sub-units fail, then that failure should propagate up to the client or call site—decoupling high-level concerns such as "what do I do if this subsystem is down?" from the expectation that the happy path should just work.

TL;DR: the high-level pattern

For those uninterested or just wanting to get the gist of what was discussed in the linked post above: the with statement gives two major wins over Elixir's simpler case construct:

  1. It stops you from having to break every possible piece of conditional logic out into either another function (in many cases adding much noise to a module) or into a further nested case (which only serves to make things much harder to follow, especially when combined with the second point).

  2. It allows you to be very terse. If you don't explicitly want to handle errors, you don't have to. Errors automatically bubble up for you to conditionally handle, or not, as you please. This is especially useful if you would otherwise have many nested case statements—something common in the Erlang world, and perhaps only less common in the Elixir world because of constructs such as with (see the sketch just below this list for the contrast).
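
For contrast, here's a rough sketch of the nested case version of the same three-step flow, using the same illustrative do_step_one/0, do_step_two/0, and do_step_three/0 helpers as the with example below:

def some_function_with_nested_cases() do
  case do_step_one() do
    {:ok, _step_1} ->
      case do_step_two() do
        {:ok, _step_2} ->
          do_step_three()

        {:error, _reason} = error ->
          error
      end

    {:error, _reason} = error ->
      error
  end
end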

When I'm writing logic in Elixir, I almost always write it in the following form:

def some_function() do
  with {:ok, step_1} <- do_step_one(),
       {:ok, step_2} <- do_step_two(),
       {:ok, step_3} <- do_step_three() do
    {:ok, step_3}
  end
end

If I call such a function, any of do_step_one/0, do_step_two/0, or do_step_three/0 may fail. If any one of these functions doesn't return something in the shape {:ok, _something}, that return value falls out of the with and I receive it directly. This means that everything is dandy so long as those three functions each return a domain-specific and helpful error.

Assume do_step_one/0 fails, and that it does something super important, as follows:

defp do_step_one() do
  case check_database_connection() do
    {:ok, %Db.Connection{} = connection} ->
      {:ok, connection}

    _otherwise ->
      {:error, :database_down}
  end
end

As a caller of some_function/0, I will very nicely get a reason as to why my invocation of some_function/0 failed. No additional code needed!
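
For illustration, assuming the database really is down, poking at this from iex would look something like:

iex> some_function()
{:error, :database_down}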

Alas, not all systems are so conveniently self-contained. The majority of the systems I've worked on over the last two years, as a consultant on a few large Elixir projects for multiple clients, have been services exposed on the internet and used by numerous different devices. This kind of "letting errors propagate to the client" sounds great, but what does that actually mean in the context of a large Elixir application serving users over some API or website, versus someone playing around in a REPL?

High-level architecture for a Phoenix application

Phoenix is probably one of the most common frameworks we see as Elixir developers. Phoenix has a very loosely defined convention of splitting up your business logic into the following sections:

  • Controllers: these are modules responsible for mapping HTTP requests to business logic in Contexts. Controllers typically just encapsulate CRUD actions, but ultimately, are just small wrappers transforming external requests into one (or more) Context function invocations in series.

  • Contexts: these are small modules that encapsulate "units" of business logic. In deliberately loose terms, I might have a high-level context called Auth, which provides functions that work on entities vaguely related to the concept of "authorization", such as create_user, validate_user_password, user_belongs_to_organization?, etc.

  • Schemas: these are modules that define structs for your contexts to work against. More often than not, these structs are backed by Ecto (regardless of whether or not you're using a database). These modules might provide minimal utility functions for performing work on these structs, validating them, etc.

Even if you're not using Phoenix in your Elixir app, I've definitely found much merit in this way of splitting up a system. In the majority of projects or libraries I work on, I adopt something similar. You can get further decoupling using umbrella projects instead of standard Elixir projects, but this is not an important detail for now.

One nice thing about this controller/context/schema split in particular is that not only is the separation of concerns (web layer/business-logic layer/database layer) evident, the error handling concern is appropriately split too:

# This is our controller:
defmodule MyAppWeb.SomethingController do
  use MyAppWeb, :controller

  alias MyApp.Somethings
  alias MyApp.Somethings.Something

  def show(conn, %{"id" => id}) do
    case Somethings.get_something_by_id(id) do
      {:ok, %Something{} = something} ->
        conn
        |> put_status(:ok)
        |> render("show.json", %{something: something})

      {:error, reason} when reason in [:not_found, ...] ->
        conn
        |> put_status(reason)

      _error ->
        conn
        |> put_status(:internal_server_error)
    end
  end
end

# This is our context:
defmodule MyApp.Somethings do
  alias MyApp.Somethings.Something
  alias MyApp.Repo

  def get_something_by_id(id) do
    Something
    |> Something.where_id(id)
    |> Something.where_not_deleted()
    |> Repo.one()
    |> case do
      %Something{} = something ->
        {:ok, something}

      nil ->
        {:error, :not_found}
    end
  end
end

# This is our schema:
defmodule MyApp.Somethings.Something do
  use Ecto.Schema
  import Ecto.Changeset
  import Ecto.Query

  schema "somethings" do
    field :something, :string
    field :deleted, :boolean, default: false
  end

  def changeset(%__MODULE__{} = entity, attrs) do
    entity
    |> cast(attrs, [:something, :deleted])
  end

  def where_id(query, id) do
    from s in query,
      where: s.id == ^id
  end

  def where_not_deleted(query) do
    from s in query,
      where: s.deleted == false
  end
end

Note that each layer of our business logic example here handles its own concerns:

  • Our schema only cares about providing minimal, composable functions for working with the database. It validates data going into the database and provides composable queries which can be chained together arbitrarily.

  • Our context uses the small functions available to it from schemas (and other modules, such as MyApp.Repo) to provide atomic chunks of logic: fetch X from the database, create a new Y, etc.

  • Our controller maps both successful and unsuccessful invocations of our context functions into domain-specific responses, which we return to the client making the web request.

An example of composability and extensibility

This is essentially the entire idea at work already, just a straightforward version of it. Imagine, instead of the simple example above, we're working to authenticate a user trying to log into a service. I'll leave out the schema implementation details for brevity, but an example would end up looking like this:

# This is our controller:
defmodule MyAppWeb.AuthTokenController do
  use MyAppWeb, :controller

  alias MyApp.Auth
  alias MyApp.Auth.User
  alias MyApp.Auth.Token
  alias MyApp.Auth.Organization

  @encodable_errors [:unauthorized, :not_found, :invalid_password]

  def create(conn, %{"username" => username, "password" => password, "org_code" => org_code}) do
    with {:ok, %Organization{} = organization} <- Auth.get_organization_by_code(org_code),
         {:ok, %User{} = user} <- Auth.get_user_by_username_and_organization(username, organization),
         {:ok, %User{} = user} <- Auth.validate_user_password(user, password),
         {:ok, %Token{} = token} <- Auth.create_auth_token(user, organization) do
       conn
       |> put_status(:ok)
       |> render("show.json", %{token: token})
    else
      {:error, reason} when reason in @encodable_errors ->
        conn
        |> put_status(reason)

      {:error, %Ecto.Changeset{} = error} ->
        conn
        |> put_status(:bad_request)
        |> json(build_error(error))
    end
  end

  # Some extensive pattern matching
  defp build_error(error), do: ...
  defp build_error(error), do: ...
  defp build_error(error), do: ...
  defp build_error(error), do: ...
end

# This is our context:
defmodule MyApp.Auth do
  alias MyApp.Auth.User
  alias MyApp.Auth.Token
  alias MyApp.Auth.Organization
  alias MyApp.Repo

  @ttl_minutes 60

  def get_organization_by_code(org_code) do
    Organization
    |> Organization.where_org_code(org_code)
    |> Repo.one()
    |> case do
      %Organization{} = organization ->
        {:ok, organization}

      nil ->
        {:error, :not_found}
    end
  end

  def get_user_by_username_and_organization(username, %Organization{id: organization_id}) do
    User
    |> User.where_username(username)
    |> User.where_is_authenticated()
    |> User.where_organization_id(organization_id)
    |> User.is_not_deleted()
    |> Repo.one()
    |> case do
      %User{} = user ->
        {:ok, user}

      nil ->
        {:error, :unauthorized}
    end
  end

  def validate_user_password(%User{} = user, password) do
    if User.encrypt_password(password) == user.hashed_password do
      {:ok, user}
    else
      {:error, :invalid_password}
    end
  end

  def create_auth_token(%User{} = user, %Organization{} = organization, ttl_minutes \\ @ttl_minutes) do
    %Token{}
    |> Token.changeset(%{user: user, organization: organization, ttl_minutes: ttl_minutes})
    |> Repo.insert()
  end
end

Ignoring whether or not we want to expose such fine-grained errors for an auth flow (we never do!), we can see the strength of being able to compose everything. Literally every single piece of business logic here is built to be trivially composable. If we wanted to make it such that a user could request a higher token ttl, we could extend the controller function:

def create(conn, %{"username" => username, "password" => password, "org_code" => org_code} = params) do
  custom_ttl = Map.get(params, "ttl", 1200)

  with {:ok, %Organization{} = organization} <- Auth.get_organization_by_code(org_code),
       {:ok, %User{} = user} <- Auth.get_user_by_username_and_organization(username, organization),
       {:ok, %User{} = user} <- Auth.validate_user_password(user, password),
       {:ok, %Token{} = token} <- Auth.create_auth_token(user, organization, custom_ttl) do
     conn
     |> put_status(:ok)
     |> render("show.json", %{token: token})
  else
    {:error, reason} when reason in @encodable_errors ->
      conn
      |> put_status(reason)

    {:error, %Ecto.Changeset{} = error} ->
      conn
      |> put_status(:bad_request)
      |> json(build_error(error))
  end
end

# Some extensive pattern matching, as before
defp build_error(error), do: ...
defp build_error(error), do: ...
defp build_error(error), do: ...
defp build_error(error), do: ...

And what if Auth.Token.changeset/2 had some validation for the minimum or maximum TTL that can be set? Those errors would propagate up through the context and the controller, be serialized into some format the client can understand, and be returned to the user!
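
As a quick sketch of what that validation might look like (the bounds here are made up, and the other Token fields are omitted), Ecto.Changeset.validate_number/3 covers it:

# Inside MyApp.Auth.Token, a hypothetical TTL validation:
def changeset(%__MODULE__{} = token, attrs) do
  token
  |> cast(attrs, [:ttl_minutes])
  |> validate_number(:ttl_minutes, greater_than: 0, less_than_or_equal_to: 1440)
end

With this in place, Repo.insert/2 sees the invalid changeset, returns {:error, %Ecto.Changeset{}}, and that tuple falls out of the with in the controller straight into the changeset clause we already wrote.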

This is the power of writing code primarily with the happy path in mind.

Committing to happy-path-programming: Middleware!

Now that we've outlined the gist, we can dive even deeper into this cult of composability: let's remove error handling from view entirely!

If we take the auth flow example we've been working on, this is all well and good, but we've ended up with a bunch of boilerplate. We still need to extensively pattern match and handle errors from our contexts to map said errors into something client-specific for every action we want to expose through our controllers. Isn't this orthogonal to the point of Railway-Oriented programming? Perhaps.

Unfortunately, for any case where we expect an error to occur, we really do want to map it for the user's benefit. Unexpected errors will always arise, and unexpected errors warrant exceptional behaviour (i.e. crashing the process responsible for handling the user's request a-la OTP), but errors that are entirely within the domain of errors we expect? We should handle those.

Thankfully, instead of writing macros we inject everywhere (a common approach on a few projects I've seen), or just saying "screw it" and manually implementing the boilerplate, we can be a little smarter about it.

I'm not too familiar with the internals of Phoenix, but essentially, all incoming requests go through a pipeline of functions (plugs), and a controller is itself just another plug executed as part of this pipeline. When we call render on the happy path, we're essentially saying "return this to the client who requested this", but Phoenix also has a built-in mechanism for handling the case where we don't explicitly call render: a fallback plug, registered with action_fallback!

defmodule MyAppWeb.AuthTokenController do
  use MyAppWeb, :controller
  action_fallback MyAppWeb.ErrorHandler

  alias MyApp.Auth
  alias MyApp.Auth.User
  alias MyApp.Auth.Token
  alias MyApp.Auth.Organization

  def create(conn, %{"username" => username, "password" => password, "org_code" => org_code}) do
    with {:ok, %Organization{} = organization} <- Auth.get_organization_by_code(org_code),
         {:ok, %User{} = user} <- Auth.get_user_by_username_and_organization(username, organization),
         {:ok, %User{} = user} <- Auth.validate_user_password(user, password),
         {:ok, %Token{} = token} <- Auth.create_auth_token(user, organization) do
       conn
       |> put_status(:ok)
       |> render("show.json", %{token: token})
    end
  end
end

defmodule MyAppWeb.ErrorHandler do
  use MyAppWeb, :controller

  @http_errors [:forbidden, :not_found, :conflict, ...]

  def call(conn, {:error, :unauthorized}) do
    conn
    |> put_resp_header("www-authenticate", "Bearer")
    |> put_status(:unauthorized)
    |> halt()
  end

  def call(conn, {:error, http_error}) when http_error in @http_errors do
    conn
    |> put_status(http_error)
    |> json(%{errors: [%{code: http_error}]})
    |> halt()
  end

  ...
end

This is much nicer, in my opinion. We massively clean up the core business logic of our main controller, insofar as it now essentially just describes the steps that have to happen to create an auth token. If anything goes wrong, those errors are handled in a module whose entire purpose is to map context errors into client errors. We also get the added benefit of centralising how we handle errors, and thus of returning errors consistently no matter where they happen, so long as our error handler is being used!

Anything unexpected will also either propagate to the client untouched (which is one valid option) or will just cause a crash (another valid option). That implementation detail can be left solely to you 😊
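
If you would rather catch the truly unexpected generically instead of crashing, one possible catch-all clause for the error handler (purely a sketch, not prescriptive) is:

# A last-resort clause; anything we didn't anticipate becomes a 500.
def call(conn, _unexpected_error) do
  conn
  |> put_status(:internal_server_error)
  |> json(%{errors: [%{code: "internal_server_error"}]})
  |> halt()
end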

Addendum: Error handling for Absinthe and GraphQL

As stated previously, Phoenix is ubiquitous on Elixir projects. The majority of Elixir projects I've been brought in to help out on have had Phoenix as part of their stack.

This is also true of Absinthe, a really nice framework for serving GraphQL.

At the time of writing, error handling honestly isn't anywhere near standardized for GraphQL. I've worked on approaches that use union types to represent typed errors for the client, as well as approaches that just settle for the more standard top-level errors list.

While there's plenty of ongoing development here that may change whether this remains the best approach going forward, at the time of writing we've ended up extending our error-handling middleware idea to Absinthe as well.

Much like Phoenix, Absinthe is essentially designed as a pipeline of functions that end up with your resolvers being executed. GraphQL Schemas and their corresponding resolvers are kind of analogous to Phoenix controllers in that they essentially just call functions in our standard contexts.

Resolvers in Absinthe are expected to return either {:ok, _some_happy_path_result} or {:error, some_supported_error_format}, but much like our Phoenix controller functions, we can implement a middleware to transparently take care of anything that isn't in an expected shape.
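
For example, a resolver for our auth token flow (the module and function names here are illustrative) ends up as just another thin wrapper over the same context functions:

defmodule MyAppGraphQL.Resolvers.Auth do
  alias MyApp.Auth
  alias MyApp.Auth.Organization
  alias MyApp.Auth.Token
  alias MyApp.Auth.User

  def create_auth_token(_parent, %{username: username, password: password, org_code: org_code}, _resolution) do
    with {:ok, %Organization{} = organization} <- Auth.get_organization_by_code(org_code),
         {:ok, %User{} = user} <- Auth.get_user_by_username_and_organization(username, organization),
         {:ok, %User{} = user} <- Auth.validate_user_password(user, password),
         {:ok, %Token{} = token} <- Auth.create_auth_token(user, organization) do
      {:ok, token}
    end
  end
end

Anything other than {:ok, token} ends up in resolution.errors, which is exactly what the middleware below cleans up.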

A similar middleware to the Phoenix one can be implemented as follows:

defmodule MyAppGraphQL.ErrorHandler do
  @behaviour Absinthe.Middleware

  def call(%Absinthe.Resolution{errors: errors} = resolution, _config) do
    %Absinthe.Resolution{resolution | errors: Enum.flat_map(errors, &process_error/1)}
  end

  # Absinthe expects entries in `errors` to be strings or maps containing a
  # :message key, so we map our domain atoms into that shape.
  defp process_error(:unauthorized) do
    [%{message: "unauthorized", status: 401, code: "unauthorized"}]
  end

  ...
end

And once written, it either needs to be added to Absinthe's top-level middleware execution pipeline, which is executed for all resolvers, or to individual resolvers:

# Top level middleware registration
defmodule MyAppGraphQL.Schema do
  use Absinthe.Schema

  def middleware(middleware, _field, _context) do
    middleware ++ [MyAppGraphQL.ErrorHandler]
  end
end

# Resolver specific middleware registration
field :auth_token, :auth_token do
  middleware MyAppGraphQL.AuthenticationErrorHandler
  ...
end

Addendum: More complex Ecto.Schema/Ecto.Query composition

I'm often asked how we can write composable functions to dynamically build queries involving all sorts of things such as joins, sub-queries etc.

This is a good question since the naive implementation gets verbose and extremely brittle very quickly.

Typically, if I need to define functions that do complex joins, instead of doing it as follows:

defmodule MyApp.Auth.User do
  use Ecto.Schema
  import Ecto.Changeset
  import Ecto.Query

  alias MyApp.Auth.Organization

  schema "user" do
    field :username, :string
    belongs_to :organization, Organization
  end

  def changeset(%__MODULE__{} = entity, attrs) do
    entity
    |> cast(attrs, [:username, :organization_id])
  end

  def where_id(query, id) do
    from u in query,
      where: u.id == ^id
  end

  def join_organization(query) do
    from u in query,
      join: o in assoc(u, :organization)
  end

  def where_organization_region(query, organization_region) do
    from [u, o] in query,
      where: o.region == ^organization_region
  end
end

# Used as follows:
def user_has_region?(user_id, region) do
  User
  |> User.where_id(user_id)
  |> User.join_organization()
  |> User.where_organization_region(region)
  |> Repo.exists?
end

I tend to use what Ecto calls named bindings. Instead of composing queries and referring to joined entities by position, a-la from [a, b, c, d] in query, ..., you can do the following instead:

defmodule MyApp.Auth.User do
  use Ecto.Schema
  import Ecto.Changeset
  import Ecto.Query

  alias MyApp.Auth.Organization

  schema "user" do
    field :username, :string
    belongs_to :organization, Organization
  end

  def changeset(%__MODULE__{} = entity, attrs) do
    entity
    |> cast(attrs, [:username, :organization_id])
  end

  def where_id(query, id) do
    from u in query,
      where: u.id == ^id
  end

  def join_organization(query) do
    if has_named_binding?(query, :organization) do
      query
    else
      from u in query,
        join: o in assoc(u, :organization),
        as: :organization
    end
  end

  def where_organization_region(query, organization_region) do
    query = join_organization(query)

    from [u, organization: o] in query,
      where: o.region == ^organization_region
  end
end

# Used as follows:
def user_has_region?(user_id, region) do
  User
  |> User.where_id(user_id)
  |> User.where_organization_region(region)
  |> Repo.exists?
end

This way, not only do we not have to care what order entities were joined in (which, for very complex queries, is hard to ascertain, and for usage across many functions in many different contexts, is basically impossible to guarantee), but we also never have to explicitly call join_organization/1, hiding more low-level implementation detail. And since join_organization/1 is a no-op if the relation is already joined, there's no harm in building extremely complex flows around this!
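
As a quick illustration (the id and region values are made up), these two pipelines build equivalent queries, because the second join attempt is detected and skipped:

# where_organization_region/2 takes care of the :organization join for us...
query =
  User
  |> User.where_id(123)
  |> User.where_organization_region("emea")

# ...and an explicit (or repeated) join is harmless; this builds an equivalent query.
same_query =
  User
  |> User.where_id(123)
  |> User.join_organization()
  |> User.where_organization_region("emea")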

In addition to this, if we have functions that need to peek into fields of structs that may not have been preloaded yet, one pattern I've ended up following (which perhaps breaks the encapsulation of a struct's concerns into that struct's schema, or its embodying context) is simply to call MyApp.Repo.preload(my_struct, [:the_field_i_care_about]) whenever I need to.

MyApp.Repo.preload/2 is actually similar to the join_organization/1 function we defined in that it only preloads the association if it hasn't already been preloaded, meaning that this is essentially costless! Nice 💪
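
As a sketch (the function and the Organization name field here are hypothetical), that pattern looks something like this:

# e.g. in MyApp.Auth, where User, Organization, and Repo are already aliased:
def organization_name(%User{} = user) do
  # Repo.preload/2 skips associations that are already loaded, so calling
  # this on an already-preloaded user costs nothing.
  %User{organization: %Organization{name: name}} = Repo.preload(user, [:organization])
  name
end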

Addendum: Compounding guarantees and unit testing

In short, one other, little-talked-about advantage of this approach of building complex flows by composing small functions and queries and delegating error handling to the edges of your application is the improved testing story.

If you unit test each of the functions in your context, you know your assumptions about your schema are sound, and your core business logic functions work as expected. This gives you a degree of guarantees.
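
As a sketch, a unit test for the Somethings context from earlier (assuming the MyApp.DataCase helper that Phoenix generates) might look like:

defmodule MyApp.SomethingsTest do
  use MyApp.DataCase, async: true

  alias MyApp.Somethings

  test "get_something_by_id/1 returns {:error, :not_found} when no record exists" do
    assert {:error, :not_found} = Somethings.get_something_by_id(-1)
  end
end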

Following this, if you unit test your Phoenix controllers, you need only test one main thing: does your controller correctly transform the parameters given to it into parameters that you can pass into your context functions? If so, you inherit your context function unit tests; likewise for Absinthe resolvers.

If you unit test your Phoenix error handler, you gain guarantees about how your system performs under various error cases. Likewise for our Absinthe middleware.
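
A sketch of such a test, driving the error handler directly with Phoenix.ConnTest's build_conn/0:

defmodule MyAppWeb.ErrorHandlerTest do
  use ExUnit.Case, async: true
  import Phoenix.ConnTest, only: [build_conn: 0]

  test "maps :unauthorized onto a halted 401 response" do
    conn = MyAppWeb.ErrorHandler.call(build_conn(), {:error, :unauthorized})

    assert conn.status == 401
    assert conn.halted
  end
end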

Lastly, Phoenix and Absinthe aren't the only frameworks/patterns for building Elixir apps. One common component in most Elixir/Erlang applications is the gen_server, gen_statem, or some other OTP behaviour.

These, like Absinthe resolvers and Phoenix controllers, are essentially just a different way of composing functions that should exist in your contexts, and thus can be tested the same way: unit test the underlying context functions, and then unit test each process's handle_* callback functions. You often don't even need to spin up a process to exhaustively test it, and if you do, one could make the argument that those tests belong in a supervisor test anyway 😉
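
As a tiny illustration, OTP callbacks are just functions, so you can drive them directly with plain data; the Counter module here is made up and defined inline purely for this example:

defmodule MyApp.CounterTest do
  use ExUnit.Case, async: true

  # A hypothetical GenServer, defined inline purely for illustration.
  defmodule Counter do
    use GenServer

    def init(count), do: {:ok, count}
    def handle_call(:get, _from, count), do: {:reply, count, count}
    def handle_cast(:increment, count), do: {:noreply, count + 1}
  end

  test "callbacks can be exercised without starting a process" do
    assert {:reply, 1, 1} = Counter.handle_call(:get, {self(), make_ref()}, 1)
    assert {:noreply, 2} = Counter.handle_cast(:increment, 1)
  end
end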

Conclusion

Hopefully, following this, you'll find that your Phoenix controllers, GraphQL resolvers, OTP Processes, and essentially anything else are a lot easier to reason about and extend!

These solutions were inspired by ones we put in place across many different large production Elixir projects in different business domains. These patterns have essentially become my go-to tools for taming complexity and exponentially increasing developer productivity.

I hope these patterns and thoughts help you as much as they've helped my team and me! Thanks for reading 💪
