# kubuszok.com

Personally just a "developer" without an X in front of it, currently working with Scala.

I enjoy learning new things, especially more abstract ones like mathematics or algorithms.

When I browse Reddit, read Hacker News comments, or google for Scala-related topics, I sometimes run into misconceptions. These are not questions like what is a monad or how to start using Cats - people asking those already have some basic idea of what they want to learn.

What some folks find confusing are questions like: what is Shapeless for? What's the difference between Cats and Scalaz? Is Scala dying? Who is Typelevel? Do I need SBT? So here I gathered some questions that people validly ask and don't always get a good answer to, written from the POV of an ordinary member of the community who doesn't belong to any inner circle. The list is non-exhaustive.

## Misconceptions and questions

### Is Scala dying?

At some point, certain people started spreading the news that Scala is dying. Besides standard FUD, similar to the Java is dying phenomenon, it originated in the main Scala contributor changing its name from Typesafe to Lightbend. At the same time, Lightbend decided to extend Java support by releasing some of its technology stack Java-first.

So some people concluded that Scala is in such bad shape that its creators needed to retarget to Java shops. However, that is not necessarily true. Indeed, Typesafe had some trouble convincing companies to adopt its stack when it was Scala-only, but by supporting both Java and Scala it makes it easier for a client to make a gradual transition from one stack to the other - quite a valid business decision. Meanwhile, Scala receives as much love and care as ever, if not more than ever.

And the change of name was… well, a pure marketing decision. Some people had it beaten into their skulls that Typesafe = not Java, so by changing the name the company could make a first impression all over again.

There is also the matter of TIOBE rankings and GitHub repos, where Scala is quite far from the top - as is the rest of the more-than-superficially functional languages, if you think about it. FP got into the mainstream only a few years ago, and while it gains popularity faster in some industries (e.g. fintech), less data-heavy applications will take more time to switch.

### What is Dotty? Are Scala’s authors forsaking it?

At some point I saw FUD claiming that Scala is being forsaken by its author - Martin Odersky - as he moved on to Dotty, and so the language is awaiting inevitable doom.

Except that Dotty - as mentioned on its official page - is the codename for Scala 3.0, where experiments currently happen and future directions of the language's design are being tested. There are still things to do, but it already looks promising.

So Scala is not being forsaken because of Dotty - Dotty will become the next Scala. And some solutions from Dotty are being backported to 2.x to make the transition smoother.

### Is Scala a functional language?

I was once at a meetup where the collective decided that the name functional language is overused, so they would redefine it. The current (legitimate) definition is:

> functional language - a language which treats functions as first-class citizens

which means that you can pass a function to, and/or return a function from, another function. Since a lot of languages have adopted lambdas recently (and a pointer to a function also counts), basically all popular languages can be considered functional. Including Scala.
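The whole definition fits in a few lines of Scala (the names `double`, `applyTwice`, and `adder` are made up for illustration):

```scala
object FirstClass {
  // a function stored as a plain value
  val double: Int => Int = _ * 2

  // a function taken as an argument: applies f twice to x
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

  // a function returned as a result
  def adder(n: Int): Int => Int = x => x + n
}
```

`FirstClass.applyTwice(FirstClass.double, 3)` evaluates to `12`, and `FirstClass.adder(5)(10)` to `15` - functions move around like any other value.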

What the people at the meetup really meant was a stricter notion:

> purely functional language - a functional language where all functions are pure (have no side effects)

Translating into English: your function cannot change the world outside it - print, read input, connect to the internet, access the disk, or hurt your kitty. It can only take whatever values you passed to it, derive some information from them, and maybe return it. (Having an inner counter might also be considered affecting the world: that data would have to be stored somewhere, and by changing the counter you would affect the world. On the other hand, having constants and calling other pure functions is still pure.) Scala doesn't enforce that (and since it works on the JVM, it's virtually impossible). Odersky is considering adding an effect system to Dotty which would help indicate that a function is effectful, but that is just an idea, and it gives you no guarantee that some Java library won't e.g. query a database.
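A minimal sketch of the distinction (function names are mine, chosen for illustration):

```scala
object Purity {
  // Pure: the result depends only on the arguments, and nothing outside
  // the function changes - calling it twice always gives the same answer.
  def add(a: Int, b: Int): Int = a + b

  // Impure: printing changes the world outside the function.
  def addAndLog(a: Int, b: Int): Int = {
    println(s"adding $a and $b")
    a + b
  }

  // Impure: the "inner counter" - mutating state that outlives the call.
  private var counter = 0
  def addCounted(a: Int, b: Int): Int = {
    counter += 1 // hidden write, observable across calls
    a + b
  }
  def calls: Int = counter
}
```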

So, is it worth it?

Those 2 concepts also differ from what people who call themselves functional programmers mean (sorry, I couldn't find a better name to distinguish them). For instance, in Java you can write 10 functions and pass them between some objects that use them to mutate their state. Or have functions that perform side effects. That is enough to consider Java a functional language, but many programmers would argue about calling it functional programming. Functional programming is usually associated with concepts present in functional languages and enforced by purely functional languages:

• data immutability
• pure functions
• referential transparency
• function composition
• algebraic data types
• declarative style
• separation of side effect from the pure part (usually by IO monad)
• heavy usage of functions like map, flatMap, foldLeft, foldRight, reduce - names may differ depending on the language.

All of that is possible with Scala - as well as other things, since Scala is a marriage of FP and OOP.
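A small sketch of a few items from that list - an algebraic data type, a pure function written declaratively with pattern matching, and map/foldLeft instead of a loop (Shape is my own toy example):

```scala
object FpSketch {
  // algebraic data type: a sealed trait with a closed set of cases
  sealed trait Shape
  final case class Circle(r: Double)          extends Shape
  final case class Rect(w: Double, h: Double) extends Shape

  // pure function over the ADT; the compiler checks the match is exhaustive
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }

  // immutable list + function composition instead of a mutable accumulator
  def totalArea(shapes: List[Shape]): Double =
    shapes.map(area).foldLeft(0.0)(_ + _)
}
```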

Summarizing: Scala is a functional language, but not a purely functional one. It allows functional programming but doesn't enforce it, because it is a mixed-paradigm language that allows both FP and OOP, declarative and imperative style.

### Is Scala hard?

(Usually) only as hard as you make it.

Scala has a variety of features that, put together, might be overwhelming to a newcomer. However, the standard library does its best to avoid forcing a programmer to use them (and sometimes fails, but a beginner probably won't run into most of those cases [her/him]self).

Features that make people feel that Scala is hard are:

• implicits - there are at least 3 different meanings of this word, and none of them should concern a newcomer. Once you get some confidence… you will still have little use for them. Their true potential lies in deriving/passing type classes and the pimp my library pattern. As an API user you will just import some stuff and shit works! As a library creator, on the other hand…
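For the curious, the pimp my library pattern mentioned above is just an implicit class - a sketch (`IntOps` is a made-up name):

```scala
object Syntax {
  // The compiler rewrites 4.squared into new IntOps(4).squared,
  // so it looks like Int itself gained a method.
  implicit class IntOps(val n: Int) extends AnyVal {
    def squared: Int = n * n
  }
}

object SyntaxDemo {
  import Syntax._        // as an API user, you just import...
  val x: Int = 4.squared // ...and the extra method is there
}
```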

• variance: invariance, covariance, contravariance - weird +/- signs next to types. You really don't need to bother with them until you write a parametrized class/function where you want to tell the compiler that if Cat extends Pet, then Owner[Cat] can be treated as Owner[Pet], and a function that works on Owner[Pet] sure as hell can work on Owner[Cat]. Sounds obvious? Then try to pull this off in C++ or Java and watch funny things happen…
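The Cat/Pet case from above, sketched in Scala - a single + is all it takes:

```scala
object VarianceDemo {
  class Pet(val name: String)
  class Cat(name: String) extends Pet(name)

  // +A declares Owner covariant: Owner[Cat] is a subtype of Owner[Pet]
  class Owner[+A](val pet: A)

  def greet(owner: Owner[Pet]): String = "hello, " + owner.pet.name

  // this compiles precisely because of the +A above
  val greeting: String = greet(new Owner(new Cat("Mia")))
}
```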

• type-level programming - let's be honest: that stuff is a godsend if you are writing a library, even an internal one. But for a simple program it would be over-engineering 10 times out of 10. If you are thinking about a library that takes recipes for a limited number of cases (e.g. how to turn a String/Int/Double into JSON) and then creates a generic recipe for any case class composed of them, or of case classes composed of… then you need it. If you want to make sure at compile time that your vector is of size 10, use it. Most of the time someone else has already used it in some library, where you just add an import and magic happens. It can wait until you feel confident with the basics and curiosity pushes you forward

• macros - YAGNI. Do you really need to modify the source code of your program during compilation? Can't you implement that method yourself instead of having the compiler do it for you? Most likely you don't need them.

…I mean, at some point you will make use of them, but the amount of people writing macros is too damn high. Perhaps by the time you need them scalameta will have matured and you won't have to learn the compiler's internal AST in order to generate some boilerplate

• encouragement of functional programming - many people come from an object-oriented background, and FP is simply alien to them. Functional programmers wouldn't consider Scala hard - annoying, sure, as it is not [Haskell/Idris/OCaml/whatever language they pray to ATM], but not hard. Kotlin or Go are not so inclined towards FP (as far as I can tell), so people coming from Java or .NET don't feel so lost there. They still use OOP, just with more functions passed around. So they might not like that:

• immutable data structures are the default
• mutable variables and while loops are discouraged
• you should combine functions instead of creating a block where you call methods
• separation of concerns is achieved by decoupling data from behavior rather than by abstracting interface from implementation
• separating domain logic from infrastructure is done by concepts like free monads or typed tagless final interpreters
• the object-oriented design patterns they have used for the last 15 years are quite often useless, and they need to learn new ones with mathematical names
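The point about combining functions instead of calling methods in a block looks like this in practice (a toy example of mine):

```scala
object Declarative {
  // the OOP-background version: mutable variables and a while loop
  def sumOfSquaresImperative(xs: List[Int]): Int = {
    var acc  = 0
    var rest = xs
    while (rest.nonEmpty) {
      acc += rest.head * rest.head
      rest = rest.tail
    }
    acc
  }

  // the encouraged version: the same computation as a pipeline of functions
  def sumOfSquares(xs: List[Int]): Int =
    xs.map(x => x * x).sum
}
```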

Explaining why this makes sense would take a rather long article - fortunately, someone has already written one.

So Scala only gets as complex as the advanced concepts you pile on top of each other. But you don't have to. As a matter of fact, you can open IntelliJ IDEA, install the Scala plugin, copy-paste code from a .java file to a .scala one, and it will convert the code with little to no issues. I landed a job after a month of Scala, FFS, and at the time I understood what map, flatMap, and filter are. It was enough to do my job and do it well. (I learned all the advanced stuff later on, especially working on libraries, so I can now do my job better - but still, you can release a decent product without writing a single macro or using an FP-oriented library).

People who converted to Scala a long time ago saw the benefits of those features, use them heavily, and it saves them time - but under no circumstances should a beginner be forced to use them. Trying to run before walking is the reason why so many people fail.

It's similar to Java and Java EE - if you played around with Java SE, it is a simple language with decent performance and an outstanding ecosystem and tooling. But if a month into Java you are thrown into a legacy Java EE project, you end up believing that Java is slow and fucked up.

### Is Scala just a Java without semicolons?

No. See above. You CAN program in Scala as if it were Java with a different syntax. But you can also program in it as if it were Haskell without the IO monad (an apparent goal of Scalaz's creators).

Scala believes you know what you are doing, and lets you do things your way.

I once heard from a certain Go programmer that Scala is academic (as opposed to Go, which is pragmatic). What makes a language pragmatic or academic? (I mean, other than marketing, buzzwords, and FUD.)

I assume that a pragmatic language helps you deliver a working solution, while an academic language appeals to some idealistic code-purity criteria that have no value to the business.

That is… still bullshit. With type checks, I can deliver faster, as I am made aware of many errors even before I run the code. I can tell by the type boundaries how some function is intended to be used, so I have additional documentation (next to plain input/output types and function/parameter names) that is always up to date - if it ceases to be true, the code no longer compiles. I can write generic code or very specific code. I can use all the JVM optimizations to my advantage… right after I design a clean, readable solution.

But my opinions mean little. Business, on the other hand, has decided that Scala works well for microservices, big data, and the DDD approach. Companies wouldn't pay so much for random academic bullshit that couldn't bring them profit.

### I heard Scala has poor IDE support - is it true?

There are posts from several years ago that are still scaring people off.

IntelliJ IDEA works great with Scala. The compiler for newer versions of Scala supports Microsoft's Language Server Protocol, and through it Sublime Text or Visual Studio Code provide intellisense, go-to-definition, etc. as well.

### I heard that companies went down with Scala - is it unmaintainable?

We need to admit there is a problem with Scala - it is a really broad language, and it has no strong opinions on how you should use it. It encourages you to use immutable data structures and for comprehensions, but that's it.

So a team composed of newcomers and enthusiasts without prior Scala experience will be prone to:

• not defining their own conventions early on,
• deferring to over-enthusiastic team members that want to use some more advanced concepts everywhere.

Additionally, some old blog posts with… not the best solutions are still around (cake pattern?), and they are countered by posts focused on overhyped concepts with relatively young adoption (free monads, typed tagless final interpreters).

In such a case it is easy to paint yourself into a corner. However, even a few experienced devs guiding the choice of concepts, libraries, and conventions can steer the project into something that can be maintained for years.

### Is Java interoperability a myth?

Well, Java has its limitations. Scala wanted some features that Java did not have, and it implemented them one way or another. As a result:

• Scala is mostly a superset of Java
• everything from Java should work in Scala
• features that are Scala-specific will probably not work (easily).

So, if you (consciously) limit yourself to:

• using traits like interfaces: no self-types, only public declarations, etc. - then a trait should work as an interface from Java (with Scala 2.12 and Java 8, traits with some implementations should compile down to interfaces with default methods)
• classes and methods with dumb declarations: no funny stuff with types, no implicits (including implicit evidence), using only types that are available in Java, no currying - then they should also work
• not using objects.
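To make the above concrete, here is a sketch of what such a consciously limited, Java-friendly Scala class might look like (`Greeter` is a made-up example):

```scala
import java.util.{List => JList}

// no implicits, no currying, no funny types - only things Java understands
class Greeter(prefix: String) {
  def greet(name: String): String = prefix + " " + name

  // java.util.List instead of a Scala collection, so Java callers
  // don't need any conversion
  def greetAll(names: JList[String]): JList[String] = {
    val out = new java.util.ArrayList[String](names.size)
    names.forEach(n => out.add(greet(n)))
    out
  }
}
```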

Of course, one could actually try to use those features… as long as names like org.myclass.fun$lamda1$lambdaF$randomStuff or my.singleton.Example$ are something one could put up with.

So, Scala is interoperable with Java in that it can use any Java code, and it can be made usable from Java where necessary. We can expect things to get better with Dotty, but full interoperability is perceived as a restriction, not a feature, so I wouldn't hope for it to ever be implemented.

### What is SBT?

Formerly an acronym for Simple Build Tool; since no one has really believed that lie for a while, it was changed to just sbt (lowercase). But out of habit I still write it as SBT.

It is a build tool used virtually only for Scala. It is also the only build tool that has reasonable support for Scala.
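For a taste, a minimal build.sbt might look like this (the project name and version numbers are illustrative, not a recommendation):

```scala
// build.sbt - each line is a setting; sbt merges them into the build
name := "my-service"
version := "0.1.0"
scalaVersion := "2.12.4"

// %% appends the Scala binary version to the artifact name,
// so this resolves to cats-core_2.12
libraryDependencies += "org.typelevel" %% "cats-core" % "1.0.1"
```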

I might one day write a blog post about how to start using it, as the official documentation consists of a series of text walls.

### What is Akka? Do I need it? What do I use it for?

When you see it in a job offer or as a question during an interview, it most likely means the Akka Actor library. Akka, however, should be understood as the ecosystem that has grown around it.

Akka Actor is an implementation of the actor model. It gained popularity as it allows modeling a domain with distributed actors that exchange messages.

On top of it, Akka Streams was built - an implementation of Reactive Streams, which lets you model your processes as a pipeline of data transformations. Its major strength is backpressure - a built-in mechanism which adapts the speed of all the parts so that none of them gets overflowed by arriving content. It is also memory-friendly - properly used reactive streams will have more or less constant memory usage and won't blow up with an OutOfMemoryError.
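The pull principle behind backpressure can be illustrated with plain Scala (this is just the idea - Akka Streams' actual API is different and does the demand signaling across asynchronous boundaries for you):

```scala
object PullDemo {
  // a conceptually infinite producer - nothing is computed up front
  def numbers: Iterator[Int] = Iterator.from(1)

  // the consumer pulls one element at a time, so the producer can never
  // outrun it; only 5 elements are ever materialized, memory stays constant
  val firstSquares: List[Int] = numbers.map(n => n * n).take(5).toList
}
```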

Alpakka is a collection of libraries providing integrations with commercial services for Akka Streams (e.g. AWS S3) - so you can, for example, stream a file from users, transform it, and post the results to an S3 bucket, with no temporary files occupying your disk space.

Akka HTTP is a library for writing HTTP servers. Since it uses Streams underneath, it should handle big workloads pretty well without much change in memory usage.

Additionally, there is Akka Cluster, Akka Persistence, Akka Typed…

Many companies adopted Scala just because of Akka Actors, so knowing them highly increases your chances of being hired. However, newer projects gradually shift from Akka Actors towards Akka Streams, as they are a higher level of abstraction with some common problems (error handling, backpressure, clarity of intent) already solved. (Also, most of the time actors are too low a level of abstraction, leading to over-engineered solutions.)

As for Akka HTTP, it is used underneath by the Play Framework, and it replaced Spray.io - the previous framework for building RESTful APIs. So, as a lighter alternative to Play, it is useful to people developing HTTP servers with Scala.

### Why there are Cats and Scalaz libraries? Isn’t Scala already a functional language?

First of all - Scala is a functional language, all right. Its standard library supports its functional features a lot. What it lacks are more advanced functional concepts implemented in an opinionated way.

So the first such implementation, intended to serve as a standard library for very-very functional Scala programs, was Scalaz. Looking at the syntax it used to promote, it is quite clear that someone missed Haskell and decided to reimplement it within Scala - syntax included.

It didn't exactly play out - Haskell fan(atic)s complained that it was not Haskell enough, and sane people complained that an additional, redundant, parallel syntax was being promoted. This and a few other issues (read: one of the main contributors felt very Torvalds about the project, and expressed it even more lively than Linus on the Linux kernel mailing list) led some part of the community to take steps.

The result was a major shitstorm. Said contributor left the community (read: was kicked out), the community split, and part of it decided to recreate Scalaz anew, with openness in mind. And so they started the Cats library.

### What is the difference between Cats and Scalaz? Which should I choose?

In the beginning there was hardly any difference - in places where Cats had some implementation, that is. Writing things from scratch took time, so sometimes implementations were almost identical; you could just swap imports and things would work the same (I certainly had this experience with free monads).

Over time Cats gained its own identity - it invested in good documentation, it does not try to introduce new syntax unless necessary, and it was the first of the two to provide some modularity (Scalaz for a long time encouraged the import-all approach). The price for that was frequent interface changes, so for a while the library was considered immature. With the recent release of 1.0.0, backward compatibility is something we should be able to expect.

Scalaz, on the other hand, for a long time stuck to its old ways. Among hardcore FP fans (check any discussion involving John De Goes on Twitter) it is considered superior and more reliable. However, in some recent versions they decided to introduce changes like modularity or a redesign of their IO-focused components.

Around both libraries arose small ecosystems. Some library creators wanted to support both ecosystems at once, but due to the effort involved they are giving up. Instead, they stick to one of them, expecting their users to use libraries like Shims or Harmony.

Personally, I'd say the choice is for the most part purely preference-based. The biggest differences lie in the IO-related libraries, but again, there are interop libraries, so one can write the whole app using Cats and some fragment with Scalaz's fs2. If you read about IO in both ecosystems and cannot tell which sounds better, you probably don't have a case where the choice of library would make a huge difference.

### What is Typelevel?

After the Scalaz-related shitstorm, somewhere around when Cats was created, the people responsible for the initiative decided they needed a Code of Conduct. And to encourage sticking to it, they decided they would cooperate more closely with projects that also follow their Code of Conduct. So they established Typelevel.

As far as I can tell, it is not, strictly speaking, an organization. It's a bunch of individuals who share some values (boiling down to don't be a jerk) and express that by having the same CoC for their projects.

So you have the Typelevel family of projects, the Typelevel Scala compiler, and the Typelevel website with a blog and documentation.

Some projects: Cats, Shapeless, Circe, Typelevel Scala.

### Do I need to choose between Akka, Cats, and Scalaz?

I would say that Akka is completely orthogonal to Cats/Scalaz.

As for Cats and Scalaz - using both at once gives little to no benefit, so it is better to stick to one of them and use Shims or Harmony to communicate with libraries that made a different choice.

Besides, many libraries decided to stick with one of them but did it internally, so at the API level you don't even notice that some FP library is incorporated. In bigger projects you might discover that different parts of your project rely on one or the other, while you yourself used only one. Or none.

### Is Shapeless useful? I found no info what it’s really for

Let's face it - the shapeless README sucks at explaining what the library really does. So instead, let's look at a quick example.

I have some data:

```scala
case class Stuff(a: String, b: Double, c: Map[String, String])
```


I also have some recipes for how to present it:

```scala
trait Show[T] { def show(value: T): String }
object Show {
  def apply[T: Show]: Show[T] = implicitly[Show[T]]

  implicit val showString: Show[String] = str => str
  implicit val showDouble: Show[Double] = dbl => dbl.toString
  implicit def showMap[K: Show, V: Show]: Show[Map[K, V]] =
    m =>
      "{ " + m.map {
        case (k, v) => Show[K].show(k) + " -> " + Show[V].show(v)
      }.mkString(", ") + " }"
  // ...
}
```


I would like to define the above basic blocks once, and then expect that when I write:

```scala
Show[Stuff].show(someStuff)
```


it will just work with no additional effort on my side.

So first, let's take a stepping stone. If - instead of some already defined structure - we had something like a list, where we could just prepend values and types, we could build our Show instance incrementally. As a matter of fact, such a concept exists and is called a heterogeneous list:

```scala
sealed trait HList

sealed trait HNil extends HList {
  def ::[T](value: T): T :: HNil = ::(value, HNil)
}
case object HNil extends HNil

case class ::[H, T <: HList](head: H, tail: T) extends HList
```


And you could build some complex product type with it, as well as a type class for it:

```scala
val hlist: String :: Double :: Map[String, String] :: HNil =
  "test" :: 0.0 :: Map.empty[String, String] :: HNil

implicit val showHNil: Show[HNil] = _ => ""
implicit def showCons[H: Show, T <: HList: Show]: Show[H :: T] =
  cons => Show[H].show(cons.head) + ", " + Show[T].show(cons.tail)

Show[String :: Double :: Map[String, String] :: HNil]
  .show(hlist) // "test, 0.0, {  }, "
```


OK, nice. But we wanted it for Stuff, not some random representation! And that is one of the features shapeless provides:

```scala
import shapeless._

implicit val showHNil: Show[HNil] = _ => ""
implicit def showCons[H: Show, T <: HList: Show]: Show[H :: T] =
  cons => Show[H].show(cons.head) + ", " + Show[T].show(cons.tail)

implicit def showProduct[S, T <: HList](implicit gen: Generic.Aux[S, T],
                                        show: Show[T]): Show[S] =
  s => {
    val hlist: T = gen.to(s) // our concrete S to HList T translation!
    val result = show.show(hlist)
    result.substring(0, result.length - 2) // drop trailing ", "
  }
```


With that, we can finally write:

```scala
Show[Stuff].show(Stuff("test", 0.0, Map.empty))
```


and the compiler creates Show[Stuff] for us. It might seem like a lot of boilerplate, until you figure out that the basic blocks, as well as the rules for derivation, are usually provided by library creators. So when you use Circe with import io.circe.generic.auto._, you get JSON encoders and decoders for all of your case classes, with the whole heavy lifting done by the compiler!

Of course, Shapeless use cases are not limited to this example, but it should give you a clue about what it is used for. If you want to learn more about the whole concept (called generic programming), product and coproduct types, and how it's done in Scala, take a look at The Type Astronaut's Guide to Shapeless.

### Do I need to know category theory in order to use Scala?

Nope.

It starts getting helpful once you dig into Cats and Scalaz, but the majority of projects get away without using either of those libraries, and Scala's standard library certainly doesn't force you to know anything about categories.

### Do I really need SBT?

No, you don’t.

But as painful as it is, it will be less painful than the alternatives once the project grows slightly bigger.

There are initiatives to improve that situation:

• cbt by Jan Christopher Vogt
• mill by Li Haoyi

Both try to implement the build as a pure Scala program. Time will tell if they gain momentum.

### What are some real Scala issues?

We cannot deny, there are some:

• compile times - with each macro and implicit usage, compile time slightly increases. In the long run, one needs to balance things to avoid death by a thousand cuts
• SBT as the only reasonable build tool - at times it requires a lot of digging into the internals, though the alternatives have poorer support for plugins and cross-compilation
• a smaller market - if you read a job offer mentioning Scala, most of the time you can expect either Akka Actors or Apache Spark. FP-heavy projects are usually more interesting than your average CRUD, but their number is also smaller. Hopefully, as more companies embrace event sourcing and command-query responsibility segregation, things will get better. So far Scala hype = Big Data and Machine Learning
• learning materials - while the number of quality blog posts is rising, many of them are concerned with the newest hype (free monads, typed tagless final), while the number of books about e.g. FP basics is relatively small. They exist, but one needs to know what to google for, and it's hard to expect that of a newcomer
• there are some flaws in the language design that long-time users are aware of, as well as flaws in the standard collections

Some of them could be resolved over time (compile times, design flaws, learning materials); some are probably here to stay (SBT, market size).