Wednesday, November 11, 2009

My thoughts on Go (Google's new language)

In the last couple of days I've been looking at Google's new language. Here is kind of a brain dump of my thinking on it.

First of all anything new produced by people of this caliber needs to be taken seriously. It is not done and may fizzle, but it shouldn't be dismissed out of hand. In that spirit here is a high level overview.

That said, it seems to be a mix of good things, interesting things, and things that are oddly missing. Looking at code examples it is in the C family. Since they want to appeal to systems programmers they are focused on simplicity, compilation time, and run-time performance. It has good stories to tell on all three, but I'm someone who finds scripting languages acceptable for the latter two, and that means I have a pretty high tolerance for complexity.

At a higher level they have garbage collection, first class functions, and closures. Given the audience that they are aiming for, they have downplayed the latter two. (I spent the better part of an hour trying to verify that before finding it buried in the spec.) Those are cool. They have goto, but looking through the spec I see that they also have labeled loop control. There is an old theorem that any flow of control that can be achieved with goto can be achieved with equal efficiency using labeled loop control. Therefore I suspect that goto won't get used much. (Though it does make sense for state machines.)

Those are some basics, but they have three key features that they think set their language apart from most offerings out there.

The first is goroutines. You can take any function foo and write go foo(arguments);. This basically means, "You go away and do foo while I do other stuff. When foo returns, the goroutine exits." This is a very simple yet powerful concurrency model. The programmer does not know or care whether the goroutine executes in the current thread or another thread. In fact, if the language had proper support for it, then you'd be fine with it executing in another process on another machine.

Of course telling goroutines to do work in the background isn't very useful unless you can communicate between them. Their solution to that is their second key idea, channels. A channel is something like a Unix pipe. It is just a line of communication along which you can pass messages, and data synchronizes by blocking until it is read. Channels may be one-way or two-way, and a given channel may only carry a specific type of data. They provide abstraction because you don't need to know the details of what is on the other end of the channel. This makes simple services very easy to write. For instance they offer this example of a simple RPC server:
  func startServer(op binOp) (service chan *request, quit chan bool) {
      service = make(chan *request);
      quit = make(chan bool);
      go server(op, service, quit);
      return service, quit;
  }

The first two ideas are not that surprising given the history of the people who came up with this. But the third is a little more surprising. The third idea they call interfaces. I actually dislike the name; I'd prefer to call it mixins instead. The idea is that an interface is defined to be any type that supports a given set of methods. You can then pass that object in anywhere that interface is expected. You can also add methods to the interface that are automatically available to objects satisfying that interface.

In short it is very similar to a Ruby mixin, except that you don't have to declare that you're importing the mixin, it autodetects that you fit the rules and does it for you.

OK, if that is what it has, what doesn't it have?

Well, libraries. Given that it was released yesterday, that is understandable. :-)

A bigger lack is exceptions. I understand this decision. They envision writing servers with a micro-kernel design; but what happens when one of the services breaks down? Who can catch that error? The code that launched the service? The code that is communicating with it through channels? If there is no good answer, then what sense does it make to let a service just go down? If they do add exceptions they have hard design problems to solve, which arise from the fact that you don't have a simple stack-based flow of control.

The much bigger omission is generics. The FAQ says that generics are missing because they introduce complexity in the type system and run time. Obviously so, but they are well worth it.

People who come from dynamic languages (e.g. Perl, JavaScript, Ruby, Python...) might well wonder what "generics" means. Here is a quick explanation. We have higher order functions. What if we wanted to implement grep? Well, we could fairly easily write a function that takes a function mapping int to bool and returns a pair of channels, where when you stick something in the one channel, it pops out the other if and only if the function returned true. We could write a similar one for strings. And so on and so forth.

But you have to write one function per type you want to grep on! This is highly annoying. The functions all have identical form, just with different types, yet you have to write the same function over and over again. As a result there is no way to avoid repeating certain kinds of boring code. Which is why, when asked whether Go has libraries allowing functional constructs like map, reduce, scan, filter, etc., the response was, "The type system rather precludes them. They would either have to use reflection (slow) or be specialised for each type of slice." (See this thread for that exchange.)

If you wish to solve this you need to do one of three things. The first is to have a type system that is rich enough to handle meta-functions like this. (For instance Haskell.) The second is to have types be figured out and matched at run-time, which is what dynamic languages do. And the third is to implement generics.


OK, enough of an overview. What are my thoughts?

Well first I'll watch it. It is young and nobody knows what it will do.

Moving on, I hope the Go community that springs up starts thinking about designing around the idea of capability-based systems. That is a strategy of handling security/access by handing out opaque tokens that you need to make specific kinds of requests. If you have the token then you have access. If you don't, then you don't have permission and have no way to make the request. See this introduction for an explanation of how powerful this idea is. And in Go you'd realize it by having the "opaque tokens" be channels to services that do whatever you need done.

On a somewhat negative note I have mixed feelings about interfaces. My comment on mixins is, "It is bad unless you do it a lot, and then it is good." That is because a mixin involves magic at a distance that adds stuff to your class. So using a mixin requires learning exactly how it works. That mental overhead pays off if you get to reuse it over and over again. But if you don't use it heavily, it is a source of misunderstanding and bugs. The interface idea has the same issues, with more magic, but without the huge bug of overwriting methods you implemented in a class. (That said, I'm sure that more than a few people will be wondering why they didn't get the method they wanted from interface A when it conflicts with interface B that they also match.)

On a negative note I predict that there will be a lot of bugs in real Go systems around issues like deadlocks between services. Concurrency always opens up the potential for hard-to-diagnose bugs. And knowing that, I hope they develop good tools for detecting/debugging those issues.

And finally, I think that the omission of generics is a mistake. If they want the language to grow up, they'll need to fix that. Exceptions, if done well, would be a good addition, but there are reasons to leave them out. Generics, though, are just too big an issue.


Jason said...

1) If you watch the Tech Talk, it is clear that they want to add generics, they just haven't figured out a good way to do it yet

2) They do have deadlock detection. I don't know the details, but I've gotten a "deadlock detected" message from the runtime when messing around with channels.

jorjun said...

I suspect Go designers wanted to keep the language as similar to Kernighan as possible, so that they could facilitate the upgrade of a heap of prior art that is already out there (most importantly the Linux kernel).

Ideally such an upgrade would mainly involve stripping out artefacts, such as garbage collection and then bingo you have a burgeoning Go asset-base which is basically refined by the process.

ephelon said...

Mixins are an implementation of some methods, whereas interfaces are a contract to implement a particular set of methods.

The novel feature of interfaces in Go is that you are not required to specify them explicitly on your types. It's this idea that erases the type hierarchy that becomes so much of a burden in OO-centric designs.

btilly said...

@Jason: From my viewing of the tech talk it was clear to me that they know that people want generics, but I didn't get the sense that they were sold on providing them yet.

The existence of deadlock detection is good. I'll be interested to see what happens with that.

@ephelon: A mixin isn't just the implementation of some methods. A mixin is a contract that it will implement some methods for you if you implement the API it expects. By contrast a Java interface is a contract enforced at compile time that you implement a given API, but it can't implement any standard behavior based on that API.

Go's interfaces are a mix of those two behaviors. To be precise it is a compile time enforced contract that you implement a specified API that can also implement standard behavior on top of that. But when people start using them, I suspect the design patterns that emerge will be very reminiscent of what people do with mixins.

jkp said...

Re generics:

I've been wondering why interfaces can't serve to fill this gap in some way?

From my understanding from watching the intro, you can assign anything that implements an interface to a variable typed to that interface. If I wanted to emulate a C++ template (my reading of generics) that could operate on any type, I would have to make sure my types satisfied the template requirements.

Likewise, can I not make my types conform to an interface and then have a function which is declared to accept that interface as a parameter (bad phrasing: accept an interface? accept an implementation of the interface? You get the idea...) and work with it in the same "generic" way?

Just a thought...

btilly said...

@jkp: The challenge is that the function definitions that define an interface are all typed. Which means you can't specialize them for your specific type. So, for instance, I can't create a function that takes a function turning foos into bools and a channel that passes foos, and then returns a channel passing back the foos for which the function was true.

They get around this by putting methods on the collection. For instance look at the documentation. For a collection to be sortable you have to define Len(), Less(i, j int) bool, and Swap(i, j int). Because those function signatures are in terms of the index of the collection, they are able to handle collections with arbitrary types of data. However if you wanted, for instance, Less(T, T) to be a binary operation on the underlying data type T then you'd need to write a completely boilerplate function to implement Less on the collection.

But it has a bigger limitation. It works for arrays. But when I look at a channel I think, "infinite streams of data". But you don't access a channel by index, so you can't hide the details of the type you're working with behind an integer. So if the collection I'm working with is a stream, the strategy breaks down.

Moving on and restricting to arrays, if you wanted to follow this strategy for implementing grep, you would need (among other things) to insist on being passed a function that turns an int into a bool. Note that there is no way to specify that the function takes an int and the collection, because the arbitrary collection type can't appear in the interface definition. To produce that function you'd need to write a boilerplate function like this (untested):

func bindCollection (f func (myType) bool, c myCollectionType) (curriedF func (i int) bool) {
    curriedF = func (i int) bool { return f(c[i]); };
    return curriedF;
}

Once you've added in this boilerplate per type, and then added the currying call to every attempt to use grep, these techniques become unwieldy to the point of being painful.

That said, it looks to me like the addition of parametrized interfaces would be enough to meet the majority of the desire for generic programming. With that you could do things like write a generic filtering function that lets you take a function that converts type T into bool, and an input channel that provides elements of type T, and create an output channel that provides elements of type T. And then you could insert a grep on any channel you wanted without writing the obvious boilerplate code.

Alain said...


Thanks for this review. Very interesting language. I for one wish it success.

Go seems very small, and very simple. There's no cruft or redundancy. If you want a C-like language with some basic object-oriented features and garbage collection, Go is about as simple as it gets.

I agree with you that the decision not to include generics is a bad one. Everything in Go is passed by value - except for the built-in slice and map types, which are passed by reference. Everything is allocated by "new" - except for the built-in slice and map types, which are allocated by "make". Strange decision...

Very promising language.

groverdflorian said...