First of all, anything new produced by people of this caliber needs to be taken seriously. It is not finished and may fizzle, but it shouldn't be dismissed out of hand. In that spirit, here is a high-level overview.
That said, it seems to be a mix of good things, interesting things, and things that are oddly missing. Judging from the code examples, it is in the C family. Since they want to appeal to systems programmers, they are focused on simplicity, compilation time, and run-time performance. It has good stories to tell on all three, but I'm someone who finds scripting languages acceptable on the latter two counts, and that means I have a pretty high tolerance for complexity.
At a higher level, they have garbage collection, first-class functions, and closures. Given the audience they are aiming for, they have downplayed the latter two (I spent the better part of an hour trying to verify that before finding it buried in the spec). Those are cool. They have goto, but looking through the spec I see that they also have labeled loop control. There is an old theorem that any flow of control that can be achieved with goto can be achieved with equal efficiency with labeled loop control, so I suspect that goto won't get used much. (Though it does make sense for state machines.)
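To make that concrete, here is a minimal sketch of labeled loop control; the label and the data are my own illustration, not taken from the spec:

package main

import "fmt";

func main() {
    grid := [][]int{
        {1, 2, 3},
        {4, -5, 6},
    };

Outer:
    for i, row := range grid {
        for j, v := range row {
            if v < 0 {
                fmt.Printf("first negative value is at (%d, %d)\n", i, j);
                break Outer;  // exits both loops at once, no goto needed
            }
        }
    }
}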
Those are some basics, but they have three key features that they think set their language apart from most offerings out there.
The first is goroutines. You can take any function foo and type go foo(arguments); — this basically means, "You go away and do foo while I do other stuff. When you return you exit." This is a very simple yet powerful concurrency model. The programmer does not know or care whether the goroutine executes in the current thread or another thread. In fact, if the language had proper support for it, you'd be fine with it executing in another process on another machine.

Of course telling goroutines to do work in the background isn't very useful unless you can communicate between them. Their solution to that is their second key idea, channels. A channel is something like a Unix pipe: it is just a line of communication you can pass messages along, and data synchronizes by blocking until it is read. Channels may be one-way or two-way, and a given channel may only send specific types of data. They provide abstraction because you don't need to know the details of what is on the other end of the channel. This makes simple services very easy to write. For instance, they offer this example of a simple RPC server:
func startServer(op binOp) (service chan *request, quit chan bool) {
    service = make(chan *request);
    quit = make(chan bool);
    go server(op, service, quit);
    return service, quit;
}
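To see how the pieces fit, here is a sketch of what the other side of those channels might look like; the binOp signature, the request type, and the server loop are my reconstruction, not copied from their documentation:

type binOp func(a, b int) int;

type request struct {
    a, b   int;
    replyc chan int;  // where the answer should be sent
}

func run(op binOp, req *request) {
    req.replyc <- op(req.a, req.b);
}

func server(op binOp, service chan *request, quit chan bool) {
    for {
        select {
        case req := <-service:
            go run(op, req);  // handle each request in its own goroutine
        case <-quit:
            return;
        }
    }
}

The server never knows who its clients are; it just reads requests off one channel and stops when anything arrives on the other.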
The first two ideas are not that surprising given the history of the people who came up with this. But the third is a little more surprising. The third idea they call interfaces. I actually dislike the name; I'd prefer to call them mixins instead. The idea is that an interface is defined to be any type that supports a given set of methods. You can then pass such an object in anywhere that interface is expected. You can also add methods to the interface that are automatically available to objects which support that interface.
In short, it is very similar to a Ruby mixin, except that you don't have to declare that you're importing the mixin; it autodetects that you fit the rules and does it for you.
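Here is a minimal sketch of that autodetection; the interface and the two types are my own invented illustration:

package main

import "fmt";

type Speaker interface {
    Speak() string;
}

type Dog struct{};

func (d Dog) Speak() string { return "woof"; }

type Robot struct{ id int };

func (r Robot) Speak() string { return fmt.Sprintf("beep %d", r.id); }

// announce accepts anything with a Speak() string method. Note that
// neither Dog nor Robot ever declares that it implements Speaker.
func announce(s Speaker) {
    fmt.Println(s.Speak());
}

func main() {
    announce(Dog{});
    announce(Robot{id: 7});
}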
OK, if that is what it has, what doesn't it have?
Well, libraries. Given that it was released yesterday, that is understandable. :-)
A bigger lack is exceptions. I understand this decision. The problem is that they envision writing servers with a micro-kernel design. But what happens when one of the services breaks down? Who can catch that error? The code that launched the service? The code that is communicating with it through channels? If there is no good answer, then what sense does it make to let a service just go down? And if they do add exceptions, they have hard design problems to solve, which arise from the fact that you don't have a simple stack-based flow of control.
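One answer that fits the channel model is to pass errors around as ordinary values, over the channels themselves, so that the code at the other end is what "catches" the failure. A minimal sketch, with all names my own invention:

package main

import (
    "errors";
    "fmt";
)

// result pairs an answer with an error so the client on the other
// end of the channel can decide how to react when the service fails.
type result struct {
    value int;
    err   error;
}

func safeDivide(a, b int, replyc chan result) {
    if b == 0 {
        replyc <- result{err: errors.New("division by zero")};
        return;
    }
    replyc <- result{value: a / b};
}

func main() {
    replyc := make(chan result);
    go safeDivide(10, 0, replyc);
    r := <-replyc;
    if r.err != nil {
        fmt.Println("client caught:", r.err);
        return;
    }
    fmt.Println("result:", r.value);
}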
The much bigger omission is generics. The FAQ says that generics are missing because they introduce complexity in the type system and run time. Obviously so, but they are well worth it.
People who come from dynamic languages (e.g. Perl, JavaScript, Ruby, Python...) might well wonder what "generics" means, so here is a quick explanation. We have higher-order functions. What if we wanted to implement grep? We could fairly easily write a function that takes a function mapping int to bool and returns a pair of channels, where anything you stick in the one channel pops out the other if and only if the function returned true. We could write a similar one for strings. And so on and so forth.
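Here is a rough sketch of the int version; the name grepInt and the channel plumbing are mine:

package main

import "fmt";

// grepInt wires a predicate between two channels: values flow in,
// and only the ones the predicate approves flow out.
func grepInt(pred func(int) bool) (in chan int, out chan int) {
    in = make(chan int);
    out = make(chan int);
    go func() {
        for v := range in {
            if pred(v) {
                out <- v;
            }
        }
        close(out);
    }();
    return in, out;
}

func main() {
    in, out := grepInt(func(v int) bool { return v%2 == 0; });
    go func() {
        for _, v := range []int{1, 2, 3, 4} {
            in <- v;
        }
        close(in);
    }();
    for v := range out {
        fmt.Println(v);  // prints 2 and 4
    }
}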
But you have to write one function per type you want to grep on! This is highly annoying: the functions all have identical form except for the types, yet you have to write the same function over and over again. As a result there is no way to avoid repeating certain kinds of boring code. Which is why, when asked whether Go has libraries providing functional constructs like map, reduce, scan, filter, etc., the response was, "The type system rather precludes them. They would either have to use reflection (slow) or be specialised for each type of slice." (See this thread for that exchange.)
If you wish to solve this you need to do one of three things. The first is to have a type system rich enough to express meta-functions like this (for instance Haskell's). The second is to have types figured out and matched at run-time, which is what dynamic languages do. And the third is to implement generics.
-----
OK, enough of an overview. What are my thoughts?
Well, first, I'll watch it. It is young, and nobody knows what it will do.
Moving on, I hope the Go community that springs up starts thinking about designing around the idea of capability-based systems. That is a strategy for handling security and access by handing out opaque tokens that you need in order to make specific kinds of requests. If you have the token, then you have access. If you don't, then you don't have permission and have no way to make the request. See this introduction for an explanation of how powerful this idea is. And in Go you'd realize it by having the "opaque tokens" be channels to services that do whatever you need done.
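A minimal sketch of that pattern, with all names my own invention; holding the channel is the permission:

package main

import "fmt";

type logRequest struct {
    line string;
    done chan bool;  // ack channel so the caller can wait
}

// startLogger returns a send-only channel. That channel is the
// opaque token: code that was never handed it simply has no way
// to ask the logger to do anything.
func startLogger() chan<- logRequest {
    c := make(chan logRequest);
    go func() {
        for req := range c {
            fmt.Println("log:", req.line);
            req.done <- true;
        }
    }();
    return c;
}

func main() {
    logc := startLogger();
    done := make(chan bool);
    logc <- logRequest{line: "hello", done: done};
    <-done;  // wait until the service has handled the request
}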
On a somewhat negative note, I have mixed feelings about interfaces. My comment on mixins is, "It is bad unless you do it a lot, and then it is good." That is because a mixin involves magic at a distance that adds stuff to your class, so using a mixin requires learning exactly how it works. That mental overhead pays off if you get to reuse it over and over again; if you don't use it heavily, it is a source of misunderstanding and bugs. The interface idea has the same issues, with more magic, but without the huge hazard of overwriting methods you implemented in a class. (That said, I'm sure that more than a few people will be wondering why they didn't get the method they wanted from interface A when it conflicts with interface B that they also match.)
On a negative note, I predict that there will be a lot of bugs in real Go systems around issues like deadlocks between services. Concurrency always opens up the potential for hard-to-diagnose bugs. And knowing that, I hope they develop good tools for detecting and debugging those issues.
And finally, I think that the omission of generics is a mistake. If they want the language to grow up, they'll need to fix that. Exceptions, if done well, would be a good addition, but there are reasons to leave them out. Generics, though, are just too big an issue.