
In previous posts, we covered the most basic use cases of implicits. However, to complete the picture we need to understand not only how implicits can provide instances but also how they can transform them. Once we understand that, we can talk a bit about some patterns that combine both implicit parameters and conversions.

Implicit conversions

The most infamous implicit conversion I know is scala.Predef.any2stringadd. It can turn virtually any object into any2stringadd, which adds a + method and allows us to concatenate objects as if they were strings. The thing is, usually you want to add to a number or a collection, and if you messed up your types, you would prefer compilation to fail so that you learn about the mistake. any2stringadd turns the expression into String concatenation instead, so the error you eventually receive will be very far from the mistake you made.
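A minimal sketch of how this bites (the List example is my own illustration; on Scala 2.13 any2stringadd is deprecated but still present):

```scala
val xs = List(1, 2, 3)

// We probably meant xs :+ 4 or xs ++ List(4), but passed a String by mistake.
// Instead of a type error, any2stringadd kicks in and concatenates strings:
val oops = xs + "4" // "List(1, 2, 3)4" - a String, far from what we intended
```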

Another issue with implicit conversions I learned about is how collections work. Basically, Map is an example of a PartialFunction (you can call apply and isDefinedAt on it), which in turn extends Function (which should be a total function - a perfect example that blindly following OOP actually hurts!). That means that if you make your map implicit, it will also be treated as an implicit conversion, so Scala will try to convert values by a key-value lookup.
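A sketch of that pitfall (the names are my own; the compiler may emit a feature warning, but it compiles):

```scala
// An implicit Map[String, Int] is also an implicit String => Int,
// i.e. an implicit conversion:
implicit val ages: Map[String, Int] = Map("Alice" -> 30)

// This compiles: Scala "converts" the String to an Int by a key lookup...
val age: Int = "Alice" // 30

// ...and a key missing from the map would blow up at runtime
// with a NoSuchElementException:
// val boom: Int = "Bob"
```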

So, is there any use case for implicit conversions? A reason why they were not removed completely?

Pimp-my-library pattern

Actually, there is. If we only want to create a decorator, which will perform one operation on an object and then disappear (by either returning a result or the original object), then everything should be fine. The chances of things going wrong drop even further if we make sure that our decorator is not overly greedy when it comes to wrapping, or when we must import the conversion ourselves. (Implicit conversions themselves need to be enabled either by an import or by a compiler flag.)

class DoubleOps(val double: Double) extends AnyVal {

  def isZero: Boolean = double == 0
}

implicit def doubleOps(double: Double): DoubleOps = new DoubleOps(double)


This way we are extending the original object with some additional methods, without modifying the original class. Such methods are called extension methods, classes that provide them usually have ExtensionMethods or Ops suffix, while the whole pattern is often referred to as pimp-my-library pattern.

The example above can be shortened using the implicit class syntax:

implicit class DoubleOps(val double: Double) extends AnyVal {

  def isZero: Boolean = double == 0
}


Implicit classes have some limitations: they cannot be top-level objects, so we cannot put them outside a class/object/trait. If we want to use AnyVal we cannot put them in class/trait either. So, usually, you’ll end up putting them into an object or maybe package object.
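In practice, it usually looks like this (a sketch with hypothetical names):

```scala
object syntax {

  // AnyVal is allowed here because the implicit class lives in an object:
  implicit class IntOps(val i: Int) extends AnyVal {
    def isEven: Boolean = i % 2 == 0
  }
}

// One import brings the extension method into scope:
import syntax._

4.isEven // true
```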

This method addition is heavily used by both Cats and Scalaz. For instance for our Monoid and Show type classes:

implicit class MonoidSyntax[A](val a: A) extends AnyVal {

  def |+|(a2: A)(implicit monoid: Monoid[A]): A =
    monoid.append(a, a2)
}

implicit class ShowSyntax[A](val a: A) extends AnyVal {

  def show(implicit show: Show[A]): String = show.show(a)
}

def addAndShow[A: Monoid: Show](a1: A, a2: A): String =
  (a1 |+| a2).show


In such cases, where extension methods are used to provide consistent type-class-related syntax, the objects and classes that provide it are named, well, Syntax.
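To see the whole thing end to end, here is a self-contained sketch that repeats the Syntax classes; the Monoid and Show definitions are my assumptions modeled on the previous posts:

```scala
// Assumed type class definitions, as sketched in the previous posts:
trait Monoid[A] {
  def empty: A
  def append(a1: A, a2: A): A
}

trait Show[A] {
  def show(a: A): String
}

// Example instances for Int:
implicit val intMonoid: Monoid[Int] = new Monoid[Int] {
  def empty: Int = 0
  def append(a1: Int, a2: Int): Int = a1 + a2
}

implicit val intShow: Show[Int] = new Show[Int] {
  def show(a: Int): String = a.toString
}

// The extension-method syntax from above:
implicit class MonoidSyntax[A](val a: A) extends AnyVal {
  def |+|(a2: A)(implicit monoid: Monoid[A]): A = monoid.append(a, a2)
}

implicit class ShowSyntax[A](val a: A) extends AnyVal {
  def show(implicit show: Show[A]): String = show.show(a)
}

def addAndShow[A: Monoid: Show](a1: A, a2: A): String = (a1 |+| a2).show

addAndShow(1, 2) // "3"
```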

Typed Tagless Final Interpreter

If you looked at addAndShow[A: Monoid: Show] and started wondering whether the whole program could be expressed like that, the answer is: yes, it’s known as the typed tagless final interpreter.

import cats._, cats.syntax.all._
import io.circe._, io.circe.syntax._
import io.circe.generic.auto._
import io.circe.parser._
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

final case class User(name: String, surname: String, email: String)

trait UserRepo {
  def fetchByEmail(email: String): Option[User]
  def save(user: User): Unit
}

class UserServices[F[_]: Monad](repo: UserRepo) {
  def parseJson(user: String): F[Option[User]] = Monad[F].unit.map(_ => decode[User](user).toOption)
  def asJson(user: User): F[String] = Monad[F].unit.map(_ => user.asJson.noSpaces)
  def fetchByEmail(email: String): F[Option[User]] = Monad[F].unit.map(_ => repo.fetchByEmail(email))
  def save(user: User): F[Unit] = Monad[F].unit.map(_ => repo.save(user))
}

class UserRepoInMemory extends UserRepo {
  private val users = scala.collection.mutable.Map.empty[String, User]
  def fetchByEmail(email: String): Option[User] = users.get(email)
  def save(user: User): Unit = users += (user.email -> user)
}

class Program[F[_]: Monad](userServices: UserServices[F]) {

  def store(json: String): F[Unit] = for {
    parsed <- userServices.parseJson(json)
    _ <- parsed.fold(Monad[F].unit)(userServices.save)
  } yield ()

  def retrieve(email: String): F[String] = for {
    userOpt <- userServices.fetchByEmail(email)
    json <- userOpt.fold("{}".pure[F])(userServices.asJson) // "{}" as a fallback for a missing user
  } yield json
}

val userRepo: UserRepo = new UserRepoInMemory
val userServices = new UserServices[Task](userRepo)
val program = new Program[Task](userServices)

program.store("""{"name":"John","surname":"Smith","email":"john.smith@mail.com"}""").flatMap { _ =>
  program.retrieve("john.smith@mail.com")
}.runAsync.onComplete(println)


This is a variation of some showoff code I wrote one day. The original version also used an experimental library which defined implicits in companion objects and, in the end, ran the program twice: once as Id (returning values immediately) and once as Task, executing it asynchronously.

In the name Typed Tagless Final Interpreter:

• interpreter refers to the fact that once we declare context bounds, the operations will be run by something external (a type class). This something becomes an interpreter of the code defined by our methods,
• typed - at each point the operations are typed and we don’t have to perform any additional checks to ensure that we are allowed to do what we do. Type classes provide an allowed, typed set of operations, while extension methods let them be written down in a readable form,
• tagless final refers to the fact that we end up with the final result of what in a normal interpreter would require some intermediate form (like free monads), where values of different forms would have to be distinguished (tagged, e.g. different free monad type constructors: pure, deferred, flatMapped) and pattern-matched.

The goal of such an architecture is decoupling the way you run your code (Try, Future, Task, IO, …) from the actual domain logic. Another solution to this problem (avoiding early commitment to some monad) is free monads; however, they introduce overhead due to the need to create an intermediate representation which is then interpreted into the final computation. It is even possible to optimize TTFI.
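That decoupling can be seen on a toy example (assuming Cats on the classpath; the names are mine):

```scala
import cats._
import cats.syntax.all._

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// The program only knows that F is a Monad - it commits to no concrete effect:
def addBoth[F[_]: Monad](fa: F[Int], fb: F[Int]): F[Int] = for {
  a <- fa
  b <- fb
} yield a + b

// Interpreted with Id, it runs immediately...
val now: Int = addBoth[Id](1, 2) // 3

// ...interpreted with Future, the very same code runs asynchronously:
val later: Future[Int] = addBoth(Future(1), Future(2))
Await.result(later, 1.second) // 3
```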

Magnet pattern

Any list of interesting things we can do with implicits cannot be complete without the magnet pattern. The magnet pattern is an alternative to method overloading which might return different result types depending on the input, where, instead of providing many method implementations, we provide one argument which decides the result type. This argument is called the magnet, and it is created by an implicit conversion from some other type:

import scala.concurrent.{ExecutionContext, Future}

sealed trait Magnet {
  type Result

  def get(): Result
}

object Magnet {

  implicit def fromNullary[T](f: () => T)(implicit ec: ExecutionContext): Magnet { type Result = Future[T] } =
    new Magnet {
      type Result = Future[T]
      def get() = Future { f() }
    }

  implicit def fromList[T](list: List[T]): Magnet { type Result = String } =
    new Magnet {
      type Result = String
      def get() = list.mkString(", ")
    }
}

def withMagnet(magnet: Magnet): magnet.Result = magnet.get()

import scala.concurrent.ExecutionContext.Implicits.global
withMagnet(() => Set(1, 2, 3)) // Future(Set(1, 2, 3))

withMagnet(List(1, 2, 3)) // "1, 2, 3"


This is quite a powerful pattern, but not without its issues. If the Magnet trait is not sealed, it might be extended with new ways of implicitly converting an argument into a Magnet. As such, debugging errors becomes a real issue, as you have to guess why an implicit conversion failed. Was it not imported? Was the implicit ambiguous? Was some other implicit missing?

This pattern was popularized by Spray with a blog post about its internal DSL, which also explains the rationale behind introducing it. It carried over to Akka HTTP, but I have also seen it in other libraries, e.g. sttp.

Summary

In this post, we saw that implicit conversions are potentially very dangerous. We also learned that they are very powerful and that without them some great patterns would be impossible to implement.

Whether it’s the Akka, Typelevel, or Scalaz ecosystem, Scala would not be what it is today if implicits weren’t there.

The last thing we need to cover is how implicits work, and what we can do to debug them.