Implicits, type classes, and extension methods, part 3: conversions and implicit-based patterns

In previous posts, we covered the most basic use cases of implicits. However, to complete the picture we need to understand not only how they can provide instances but also how they can transform them. Once we understand that, we can talk a bit about some patterns that combine both implicit parameters and conversions.

Implicit conversions

The most infamous implicit conversion I know is scala.Predef.any2stringadd. It can turn virtually any object into any2stringadd, which adds a + method and allows us to concatenate objects as if they were strings. The thing is, usually you want to add to a number or a collection, and if you messed up your types you would prefer to fail and know about it. any2stringadd will turn the operation into String concatenation instead, so the error you eventually receive will be very far from the mistake you made.
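As a minimal sketch of how this backfires (the Set example below is my own illustration, not from the original post):

```scala
val numbers = Set(1, 2)

// We meant to add the number 3 to the set, but passed a String by mistake.
// Set[Int].+ expects an Int, so this should be a type error...
val oops = numbers + "3"

// ...yet any2stringadd kicks in and concatenates instead:
println(oops) // "Set(1, 2)3" - a String, not a Set[Int]
```

The compile error we wanted never happens; we only notice the mistake much later, when something downstream chokes on a String.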

Another issue with implicit conversions I learned of is related to how collections work. Basically, Map is an example of a PartialFunction (you can call apply and isDefinedAt on it), which in turn extends Function (which should be a total function - a perfect example that blindly following OOP actually hurts!). That means that if you make your Map implicit, it will also be treated as an implicit conversion, so Scala will try to convert values by key-value lookup.
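To illustrate (this snippet is my own, not from the original post), an implicit Map in scope quietly becomes a key-to-value conversion:

```scala
import scala.language.implicitConversions

implicit val names: Map[Int, String] = Map(1 -> "one", 2 -> "two")

// This compiles: Scala uses the implicit Map as an Int => String view.
val word: String = 1 // a silent lookup returning "one", not a type error

// Worse, a missing key would only fail at runtime:
// val boom: String = 42 // throws NoSuchElementException
```

An assignment that should never type-check now depends on the contents of a collection at runtime.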

So, is there any use case for implicit conversions? A reason why they were not removed completely?

Pimp-my-library pattern

Actually, there is. If we only want to create a decorator which will perform one operation on an object and then disappear (by either returning a result or the original object), then everything should be fine. The chances of things going wrong drop even further if we make sure that our decorator is not overly greedy when it comes to wrapping, or when we must import the conversion ourselves. (Implicit conversions themselves need to be enabled either by an import or by a compiler flag.)

class DoubleOps(val double: Double) extends AnyVal {
  def isZero: Boolean = double == 0
}

implicit def doubleOps(double: Double): DoubleOps = new DoubleOps(double)

This way we are extending the original object with some additional methods, without modifying the original class. Such methods are called extension methods, classes that provide them usually have ExtensionMethods or Ops suffix, while the whole pattern is often referred to as pimp-my-library pattern.

The example above can be shortened using the implicit class syntax:

implicit class DoubleOps(val double: Double) extends AnyVal {
  def isZero: Boolean = double == 0
}

Implicit classes have some limitations: they cannot be top-level, so we cannot define them outside a class/object/trait. If we want them to extend AnyVal, we cannot put them in a class/trait either. So, usually, you’ll end up putting them into an object or maybe a package object.
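A sketch of the usual arrangement (the syntax object name is my own choice):

```scala
// An object groups our extension methods in one importable place.
object syntax {
  // In a real codebase this would typically `extend AnyVal` as well, to avoid
  // allocating the wrapper - allowed here because `syntax` is an object.
  implicit class DoubleOps(val double: Double) {
    def isZero: Boolean = double == 0
  }
}

// The extension method only becomes visible after an explicit import:
import syntax._

println(0.0.isZero) // true
println(1.5.isZero) // false
```

Requiring the import is a feature, not a nuisance: the reader can see exactly where the extra methods come from.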

This method addition is heavily used by both Cats and Scalaz. For instance for our Monoid and Show type classes:

implicit class MonoidSyntax[A](val a: A) extends AnyVal {
  def |+|(a2: A)(implicit monoid: Monoid[A]): A =
    monoid.append(a, a2)
}

implicit class ShowSyntax[A](val a: A) extends AnyVal {
  def show(implicit show: Show[A]): String =
    show.show(a)
}

def addAndShow[A: Monoid: Show](a1: A, a2: A): String =
  (a1 |+| a2).show

In such cases, where extension methods are used to provide a consistent type-class-related syntax, the objects and classes that provide it are named, well, Syntax.
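Putting it all together as a runnable sketch - the Monoid and Show definitions below are minimal stand-ins for the ones from the previous posts (Cats and Scalaz name things slightly differently, e.g. combine instead of append), and `extends AnyVal` is omitted so the snippet also compiles as a script:

```scala
trait Monoid[A] {
  def empty: A
  def append(a1: A, a2: A): A
}
object Monoid {
  def apply[A](implicit monoid: Monoid[A]): Monoid[A] = monoid
}

trait Show[A] {
  def show(a: A): String
}
object Show {
  def apply[A](implicit show: Show[A]): Show[A] = show
}

// Instances for Int:
implicit val intMonoid: Monoid[Int] = new Monoid[Int] {
  def empty: Int = 0
  def append(a1: Int, a2: Int): Int = a1 + a2
}
implicit val intShow: Show[Int] = new Show[Int] {
  def show(a: Int): String = a.toString
}

// The Syntax classes provide the extension methods:
implicit class MonoidSyntax[A](val a: A) {
  def |+|(a2: A)(implicit monoid: Monoid[A]): A = monoid.append(a, a2)
}
implicit class ShowSyntax[A](val a: A) {
  def show(implicit show: Show[A]): String = show.show(a)
}

def addAndShow[A: Monoid: Show](a1: A, a2: A): String = (a1 |+| a2).show

println(addAndShow(1, 2)) // "3"
```

Notice that addAndShow never mentions Int: the context bounds pull in whatever instances are in scope, and the syntax classes make the call sites read naturally.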

Typed Tagless Final Interpreter

If you looked at addAndShow[A: Monoid: Show] and started wondering whether a whole program could be expressed like that, the answer is: yes, and it’s known as the typed tagless final interpreter.

import cats._, cats.syntax.all._
import io.circe._, io.circe.syntax._
import io.circe.parser._
import io.circe.generic.auto._
import monix.eval.Task

final case class User(name: String, surname: String, email: String)

trait UserRepo {
  def fetchByEmail(email: String): Option[User]
  def save(user: User): Unit
}

class UserServices[F[_]: Monad](repo: UserRepo) {
  def parseJson(user: String): F[Option[User]] = Monad[F].pure(decode[User](user).toOption)
  def asJson(user: User): F[String] = Monad[F].pure(user.asJson.noSpaces)
  def fetchByEmail(email: String): F[Option[User]] = Monad[F].pure(repo.fetchByEmail(email))
  def save(user: User): F[Unit] = Monad[F].pure(repo.save(user))
}

class UserRepoInMemory extends UserRepo {
  private val users = scala.collection.mutable.Map.empty[String, User]
  def fetchByEmail(email: String): Option[User] = users.get(email)
  def save(user: User): Unit = users += (user.email -> user)
}

class Program[F[_]: Monad](userServices: UserServices[F]) {

  def store(json: String): F[Unit] = for {
    parsed <- userServices.parseJson(json)
    _ <- parsed.map(userServices.save).getOrElse(Monad[F].pure(()))
  } yield ()

  def retrieve(email: String): F[String] = for {
    userOpt <- userServices.fetchByEmail(email)
    json <- userOpt.map(userServices.asJson).getOrElse(Monad[F].pure(Json.obj().noSpaces))
  } yield json
}

val userRepo: UserRepo = new UserRepoInMemory
val userServices = new UserServices[Task](userRepo)
val program = new Program[Task](userServices)

// the resulting Task still has to be executed, e.g. with runToFuture
program.store("""{"name":"John","surname":"Smith","email":""}""").flatMap { _ =>
  program.retrieve("")
}

This is a variation of some show-off code I wrote one day. The original version also used an experimental library which defined implicits in companion objects, and in the end it ran the program twice: once as Id (returning values immediately) and once as Task (executing it asynchronously).

In the name Typed Tagless Final Interpreter:

  • interpreter refers to the fact that once we declare context bounds, the operations will be run by something external (a type class). This something becomes an interpreter of the code defined by our methods,
  • typed - at each point operations are typed and we don’t have to perform any sort of additional checking to ensure that we are allowed to do what we do. Type classes provide an allowed, typed set of operations, while extension methods let them be written down in a readable form,
  • tagless - refers to the fact that we can have more than one implementation, and we don’t need to decide which one to use by means of tags we could pattern match on - neither in the form of some value we compare, nor a tagged union,
  • final - refers to the fact that we don’t operate on values when defining the interface the interpreter will work on. If we used values directly and pattern matched on them ourselves, we would be using the initial encoding; if instead we declare a set of functions that have to be implemented together (e.g. as a type class), we get the final encoding. The interpreter then provides the implementation of these functions.

The goal of such an architecture is decoupling the way you run your code (Try, Future, Task, IO, …) from the actual domain logic. Another solution to this problem (avoiding commitment to some monad early) is free monads; however, they create overhead due to the need of creating an intermediate representation which will then be interpreted into the final computation. It is even possible to optimize TTFI.

Magnet pattern

No list of interesting things we can do with implicits would be complete without the magnet pattern. The magnet pattern is an alternative to method overloading which can return different result types depending on the input: instead of providing many method implementations, we provide one argument which decides the result type. This argument is called the magnet, and it is created by an implicit conversion from some other type:

import scala.concurrent.{ExecutionContext, Future}

sealed trait Magnet {
  type Result
  def get(): Result
}

object Magnet {
  implicit def fromNullary[T](f: () => T)(implicit ec: ExecutionContext) =
    new Magnet {
      type Result = Future[T]
      def get() = Future { f() }
    }
  implicit def fromList[T](list: List[T]) =
    new Magnet {
      type Result = String
      def get() = list.mkString(", ")
    }
}

def withMagnet(magnet: Magnet): magnet.Result = magnet.get()

withMagnet(() => Set(1,2,3)) // Future(Set(1,2,3))

withMagnet(List(1,2,3)) // "1, 2, 3"

This is quite a powerful pattern, but not one without issues. If the Magnet trait is not sealed, it might be extended with new ways of implicitly converting an argument into a Magnet. As such, debugging errors becomes a real issue, as you have to guess why an implicit conversion failed. Was it not imported? Was the implicit ambiguous? Was some other implicit missing?

This pattern was popularized by Spray with a blog post about its internal DSL, which also explains the rationale behind introducing it. It carried over to Akka HTTP, but I have seen it in other libraries as well, e.g. sttp.

Functional dependencies

The magnet pattern is closely related to functional dependencies. In general, the term states that some type parameters depend on other type parameters. The magnet pattern is an example of functional dependencies, where the result type of an operation depends on the types of its operands (which are parametric).

It is a more general term than the magnet pattern, though. In Scala, CanBuildFrom[From, Element, To] can, for instance, make sure that if you turn a Map into a List, the List will contain pairs.

Map(1 -> 2, 3 -> 4).to[List] 
// List[(Int, Int)] = List((1, 2), (3, 4))

In this case, not only does the returned type depend on the argument but - inside CanBuildFrom - the types depend on one another. You can expect functional dependencies to appear in the context of implicit evidence and type classes.
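We can sketch our own functional dependency with a dependent result type. The Combine type class below is a made-up illustration, and the Aux alias follows a convention popularized by shapeless:

```scala
// A made-up type class: Out is functionally determined by A and B.
trait Combine[A, B] {
  type Out
  def combine(a: A, b: B): Out
}

object Combine {
  // The Aux alias exposes the dependent type as a type parameter,
  // so other implicits can constrain it.
  type Aux[A, B, O] = Combine[A, B] { type Out = O }

  implicit val intAndString: Aux[Int, String, String] =
    new Combine[Int, String] {
      type Out = String
      def combine(a: Int, b: String): String = a.toString + b
    }
}

// The result type c.Out is path-dependent: it is decided
// entirely by which instance the compiler finds.
def combine[A, B](a: A, b: B)(implicit c: Combine[A, B]): c.Out =
  c.combine(a, b)

println(combine(1, "!")) // the result type String is implied by (Int, String)
```

Just like with CanBuildFrom, the caller never names the output type; picking the input types fixes it.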


In this post, we saw that implicit conversions are potentially very dangerous. We also learned that they are very powerful, and that without them some great patterns would be impossible to implement.

Whether it’s the Akka, Typelevel, or Scalaz ecosystem, Scala would not be what it is today if implicits weren’t there.

The last thing we need to cover is how implicits work, and what we can do to debug them.