Planet Scala

Scala blogs aggregated

October 10, 2014

Functional Jobs

Functional Software Engineer at Cake Solutions Ltd (Full-time)

At Cake Solutions we work with our customers to build high-quality, scalable and resilient software systems using the latest technology. As a software engineer, you'll not only be able to write good, maintainable software, but also stay at the forefront of technology and advocate a principled approach to software engineering. You'll get the opportunity to work on a wide range of interesting projects for our clients, using Java, Scala, Play, and Akka.

What to expect:

To begin with, you will take part in a two-week, Typesafe-certified, workshop-style training course that introduces you to Scala, Akka and Play.

You can expect a lively and challenging environment with very interesting problems to solve. We are happy to train and mentor the right people; the important thing is to have a bright mind and the motivation to question, explore and learn. Having published work on GitHub or elsewhere is extremely helpful. CVs should focus on what you really know, not what you've seen once.

If you are a graduate, we are not expecting commercial experience, but we want to see evidence of hobby or university engineering. We do, however, expect you to be independent and to have a good understanding of the principles of software engineering.

You will also get flexible working hours and gym membership.

Skills & Requirements

As a software engineer at Cake Solutions, you should:

-- Have a good understanding of Java and the JVM. Experience with Scala is a plus.

-- Know how to use UNIX or Linux.

-- Know how to apply object-oriented and functional programming styles to real-world software engineering.

-- Have experience with at least one database system and be aware of the wider database landscape (relational, document, key/value, graph, ...).

-- Understand modern software development practices, such as testing, continuous integration and producing maintainable code.

Advantages include:

-- Open-source contributions.

-- Modern web development experience.

-- An understanding of asynchronous and non-blocking principles.

-- Experience in writing multi-threaded software.

-- More detailed knowledge of strongly-typed functional programming (e.g. Scala, Haskell, OCaml).

-- Even more specifically, experience with Akka or Play.

About Cake Solutions Ltd

Cake Solutions architects, implements and maintains modern and scalable software, which includes server-side systems, rich browser applications and mobile development. Alongside the software engineering and delivery, Cake Solutions provides mentoring and training services. Whatever scale of system you ask us to develop, we will deliver the entire solution, not just lines of code. We appreciate the importance of good testing, continuous integration and delivery, and DevOps. We motivate, mentor and guide entire teams through modern software engineering. This enables us to deliver not just software, but to transform the way organisations think about and execute software delivery.

The core members of our team are published authors and experienced speakers. Our teams have extensive experience in designing and implementing event-driven, resilient, responsive and scalable systems. We use modern programming languages such as Scala, Java, C++, Objective-C, and JavaScript to implement the systems' components. We make the most of messaging infrastructures, modern DBMSs, and all other services that make up today's enterprise systems. Automated provisioning, testing, integration and delivery allow us to release high quality systems safely and predictably. The mentoring through continuous improvement at all levels of the project work gives our clients the insight and flexibility they expect.

We rely on open source software in our day-to-day development; it gives us access to very high quality code, allows us to make improvements if we need to, and provides an excellent source of inspiration and talent. We give back to the open source community by contributing to the open source projects we use, and by publishing our own open source projects. The team have contributed various Typesafe Activator templates, and shared their expertise with Akka and Scala in Akka Patterns and Akka Extras. Outside of the Typesafe stack, we have contributed to Tru-strap and OpenCV. The team members have also created open source projects that scratch our own itch: Reactive Monitor, Specs2 Spring and Scalad.

Get information on how to apply for this position.

October 10, 2014 12:02 PM

October 06, 2014

Gregg Carrier

Monkeypod - API Design and Virtualization

At work, we have just launched a new tool called Monkeypod that allows easy design of REST APIs in a browser. It automatically creates Swagger documentation and a virtualized version of the API serving dummy data instantly. We're finding it very helpful internally, and wanted to open it up to the community for a private beta. Please sign up if you do work with APIs and give us your feedback.

Sign up at http://monkeypod.io

Follow Monkeypod on Twitter at http://twitter.com/monkeypodio

Monkeypod is written using MongoDB, Swagger, Scala, Akka, Casbah, and a lot of the techniques that I have written about in this blog. I'll do a deeper technical architecture dive at some point, but if you'd like to read more about the project, please check out  http://bit.ly/1uHBuoq

Cheers!

by Gregg Carrier (noreply@blogger.com) at October 06, 2014 02:47 PM

October 02, 2014

Functional Jobs

Developer/Operations (Scala) at iGeolise (Full-time)

Searching for developers to join our growing team

iGeolise, Ltd. is a UK company with a development team in Lithuania. We offer our clients TravelTime - a B2B service which allows people to search geo data by time.

We allow our clients to provide an improved service to their customers by adding a time-based aspect to their data.

For example, if you are looking for a flat, using a TravelTime-enabled service you could find one that is no more than a 15-minute walk from your job.

Get more information at www.igeolise.com or see it in action at PropertyWide.co.uk.

Who are we looking for

We are searching for a mid-to-senior-level, enthusiastic developer who defines learning as part of who he or she is. We are a small team of professionals and hold creativity, responsibility, ownership and independence as our core values.

Our technology stack is legacy- and cruft-free - we realize the downsides of technical debt. It is also in pristine condition - we stay on the latest versions of our software, have a reliable system architecture with no single point of failure, and generally take a brains-over-muscle approach when building our apps.

The way we manage things here stems from the technological foundations of our company. We don't have deadlines - we understand the complexities of developing software and realize that finding the right solution is more valuable in the long run than meeting an arbitrary deadline. There is no such thing as a dress code here. We have a flat hierarchy - decisions are reached by discussion and consensus. IT is not something that does our work for us. IT is us.

In fact, we believe that people who enjoy themselves are the best motivated - so we have an unrestrained work schedule: you can choose how much you want to work. And we offer an employee share scheme - because, well, you helped to build the company, so why wouldn't you get a part of the financial gains? That only makes sense.

Your role and responsibilities

You would be responsible for the API and DevOps sections of our platform - basically anything that is not our core algorithms: server reliability, deployment, load balancing, data parsing and validation, messaging, etc. Your main focus would be on availability, scalability and performance.

Our tech stack is mostly written in Scala, so you must either know it or be willing to learn it. Learning Scala goes much faster with all the help that we have here, and it is quite an enjoyable experience. Atomic Scala should give you a sneak peek if you're not familiar with the language.

Location

We have an office in Kaunas, but are totally fine with you working remotely from wherever you want*. In fact, half of our team works this way.

* Our working hours are 10/11-ish to 18/19-ish GMT+2 (+3 with DST), so you must be able to be online at the same time as we are for at least 4 hours a day.

So, you want to apply

Great! Send us something about yourself (a CV, LinkedIn profile or just a short letter) and a solution to this task. Be sure to use your favourite programming language to solve it! ;)

Feel free to write if you have any questions beforehand.

Hope to hear from you soon!

Get information on how to apply for this position.

October 02, 2014 01:33 PM

Senior Scala Software Engineer at TrueAccord (Full-time)

We are a funded early-stage start-up founded by senior Google and PayPal people, looking to add an experienced software engineer familiar with Scala. If you are passionate about Scala, I think you'll enjoy working with us. Experience with Akka and Play! is a big plus.

We are building a product that solves a challenging business problem which involves machine learning, data processing, and behavioral analytics. If you want to join our proven core team and have a huge impact on our technology - we want to talk to you.

Our stack:

  • Scala, Play! and Akka
  • Running on AWS
  • MySQL on RDS
  • Ansible
  • AngularJS
  • Protocol Buffers!

Get information on how to apply for this position.

October 02, 2014 03:04 AM

Senior Software Engineer at TrueAccord (Full-time)

We are a funded early stage start-up that is looking for an experienced software engineer familiar with Scala. Experience with Akka and Play! is a big plus.

We are building a product that solves a challenging business problem which involves machine learning, data processing, and behavioral analytics. If you want to join a strong core team and have a huge impact on our technology we want to talk to you.

Get information on how to apply for this position.

October 02, 2014 03:04 AM

October 01, 2014

Functional Jobs

Senior Software Engineer at McGraw-Hill Education (Full-time)

This Senior Software Engineer position is with the new LearnSmart team at McGraw-Hill Education's new and growing Research & Development center in Boston's Innovation District.

We make software that helps college students study smarter, earn better grades, and retain more knowledge.

The LearnSmart adaptive engine powers the products in our LearnSmart Advantage suite — LearnSmart, SmartBook, LearnSmart Achieve, LearnSmart Prep, and LearnSmart Labs. These products provide a personalized learning path that continuously adapts course content based on a student’s current knowledge and confidence level.

On our team, you'll get to:

  • Move textbooks and learning into the digital era
  • Create software used by millions of students
  • Advance the state of the art in adaptive learning technology
  • Make a real difference in education

Our team's products are built with Flow, a functional language in the ML family. Flow lets us write code once and deliver it to students on multiple platforms and device types. Other languages in our development ecosystem include especially JavaScript, but also C++, SWF (Flash), and Haxe.

If you're interested in functional languages like Scala, Swift, Erlang, Clojure, F#, Lisp, Haskell, and OCaml, then you'll enjoy learning Flow. We don't require that you have previous experience with functional programming, only enthusiasm for learning it. But if you do have some experience with functional languages, so much the better! (On-the-job experience is best, but coursework, personal projects, and open-source contributions count too.)

We require only that you:

  • Have a solid grasp of CS fundamentals (languages, algorithms, and data structures)
  • Be comfortable moving between multiple programming languages
  • Be comfortable with modern software practices: version control (Git), test-driven development, continuous integration, Agile

Get information on how to apply for this position.

October 01, 2014 04:07 PM

September 30, 2014

Rafael de F. Ferreira

Themes from Strangeloop 2014

One of the many things that make the Strangeloop conference special is its interdisciplinary perspective, taking on themes ranging from core functional programming concepts - what could fit this description better than the infinite tower of interpreters seen in Nada Amin's keynote? - to upcoming software deployment approaches.

Still, some common themes seem to have emerged. One candidate is the spreadsheet as inspiration for programming. One thinker who seems to have drawn on this inspiration is Jonathan Edwards, who opened the Future of Programming workshop with a talk showcasing the latest version of his research language, Subtext. Earlier prototypes explored the idea of programming without names, directly linking tree nodes to each other via a kind of cut-and-paste mechanism. In its latest incarnation it appears to have evolved into a reactive language with a completely observable evaluation model: the entire evaluation tree is always available for exploration, and a two-stage reactive architecture allows for relating input events to evaluation steps. The user interface is auto-generated, sharing the environment with the code, much like its older reactive cousin, the spreadsheet.

Kaya, a new language created by David Broderick, explores the spreadsheet metaphor in a more literal manner: what if spreadsheets and cells were composable, allowing for naturally nested structures? Moreover, what if we could query this structure in a SQL-like manner? The result is a small set of abstractions generating complex emergent behavior, including, as in Subtext, a generic user interface.

Data-dependency-graph-driven evaluation is an important part both of modern functional reactive programming languages and of every spreadsheet package since 1978's VisiCalc. We saw some of the former in Evan Czaplicki's highly approachable talk "Controlling Time And Space: Understanding The Many Formulations Of FRP", and a bit of the latter in Felienne Hermans's talk "Spreadsheets for Developers", which sort of wraps the metaphor around and looks to software engineering for inspiration to improve spreadsheet usage.

One of the great aspects of spreadsheets is the experience of direct manipulation of a live environment. This is at the crux of what Bret Victor has been demonstrating in many of his demos, showing how different programming could be if our tools were guided by this principle. Though he did not present, the idea of direct manipulation was present at Strangeloop in several of the talks. Subtext's transparency and live reflection of code updates in the generated UI moves in this direction. Still in the Future of Programming workshop, Shadershop is an environment whose central concept is directly manipulating real-valued functions by composing simpler functions while live-inspecting the resulting plots. Stephen Wolfram's keynote was an entertaining demonstration of his latest product, the Wolfram Language. Its appeal was due, among other reasons, to the interactive exploration environment, particularly the visual representation of non-textual data and the seamless jump from evaluating expressions to building small exploratory UIs.

Czaplicki's talk discussed several of the decisions involved in designing Elm, his functional reactive programming language. I found noteworthy that many of those were taken in order to allow live update of running code and an awesome time-traveling debugger.

Taking a different perspective on the buzzword du jour, reactive, is another candidate theme for this year's Strangeloop: the taming of callbacks. They were repeatedly mentioned as an evil to be banished from the world of programming, including in Joe Armstrong's keynote, "The mess we are in", and all the functional reactive programming content took aim at the construct. And not only functional content: another gem from this year's Future of Programming workshop was the imperative reactive programming language Céu. Created by Francisco Sant'Anna at PUC-Rio - the home of the Lua programming language - Céu compiles an imperative language with embedded event-based concurrency constructs down to a deterministic state machine in C, achieving, among other tricks, fully automated memory management without a garbage collector.

Befitting our age of microservices and commodity cloud computing, another interesting current was modern approaches to testing distributed systems. Michael Nygard exemplified simulation testing - which can be characterized as property-based testing in the large - with Simulant, a Clojure framework to prepare, run, record events, make assertions and analyze the results of sophisticated black-box tests. Kyle @aphyr Kingsbury delivered another amazing performance torturing distributed databases to their breaking point. Most interesting were the lengths he had to go to in order to control the combinatorial explosion of the state space and actually verify global ordering properties like linearizability.

Speaking of going to great lengths to torture database systems, we come to what might have been my favorite talk of the conference, by the FoundationDB team: "Testing Distributed Systems w/ Deterministic Simulation". Like @aphyr's Jepsen, they control the external environment to inject failures while generating transactions and asserting that the system maintains its properties. They take great care to mock out all sources of non-determinism, including time and random number generation, and even extend C++ to add better-behaved concurrency abstractions.

Tests run thousands of times each night; nondeterministic behavior is weeded out by running each set of parameters twice and checking that the outputs don't change. FoundationDB's team goes further than Jepsen in the types of failures they can inject: not only causing network partitions and removing entire nodes from the cluster, but also simulating network lag, data corruption, and even operator mistakes, like swapping data files between nodes! Of course, the test harness itself could be buggy, failing to exercise certain failure conditions; to overcome this specter, they generate real hardware failures with programmable power supplies connected to physical servers (they report that no bugs were found in FoundationDB with this strategy, but Linux and ZooKeeper had defects surfaced - the latter isn't in use anymore).

What I particularly enjoyed from this talk was the attitude towards the certainty of failures in production. Building a database is a serious endeavor, data loss is simply not acceptable, and they understood this from the start.

Closing the conference in the perfect key was Carin Meier and Sam Aaron's keynote demonstrating the one true underlying theme: Our Shared Joy of Programming.

by Rafael Ferreira (noreply@blogger.com) at September 30, 2014 06:30 PM

Jesper Nordenberg

Type Classes, Implicit Parameters and Instance Equality

In Scala the concept of implicit parameters can be used to, among other things, emulate Haskell type classes. One of the key differences between the two approaches is that the Haskell compiler statically guarantees that there will always be at most one instance of a type class for a specific type, while in Scala there can be any number of instances, selected based on some quite complex scope-searching rules. This guarantee is sometimes brought up as an advantage of Haskell type classes: you can, for example, easily write a union function for two sets with the same element type and be certain that both sets use the same ordering instance. In Scala you would typically store the ordering instance in the set type, but since two sets can refer to different instances which have the same static type, the compiler can't statically check whether it's safe to call union on the sets. One option is of course to add a runtime equality check of the ordering instances, but it would obviously be better to have a static check. In this post I'll describe a solution that achieves the same static type safety in Scala as in Haskell.

Implicit Instance Construction

The first thing to notice is that there are two basic ways to create an implicit instance in Scala, either as an immutable value or as a function which can take other implicit instances as arguments, for example:

  // Very incomplete Ord type class as example
  case class Ord[T](v: String)

  implicit val intOrd = Ord[Int]("Int")
  
  implicit def listOrd[T](implicit ord: Ord[T]) = Ord[List[T]]("List[" + ord.v + "]")

In this example, when searching for an implicit Ord[Int] the Scala compiler will simply use the intOrd instance directly. However, when searching for an implicit of type Ord[List[Int]] things get a bit more complicated. In this case the Scala compiler figures out from the static types that it can create an instance by calling listOrd(intOrd), so it generates this call in the compiled code where the instance is requested. Further recursive calls can be generated as needed, so when searching for an implicit Ord[List[List[Int]]] it will generate a call listOrd(listOrd(intOrd)), and so on.

Note that these function calls are performed every time an implicit instance is needed at runtime; there is no memoization or caching. So, while for Ord[Int] the same instance will always be used (as it's a constant value), multiple instances of Ord[List[Int]] will be created at runtime.
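A quick way to observe this, assuming the Ord, intOrd and listOrd definitions above are in scope (Ord is a case class, so == is structural while eq compares instance identity):

  val a = implicitly[Ord[List[Int]]]  // the compiler inserts listOrd(intOrd)
  val b = implicitly[Ord[List[Int]]]  // a second listOrd(intOrd) call

  a == b  // true: structural equality
  a eq b  // false: two distinct instances were constructed
  implicitly[Ord[Int]] eq implicitly[Ord[Int]]  // true: always the constant intOrd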

Furthermore, implicit values and functions of the exact same types (but possibly with different implementations as in the example below) can be defined in other modules (or objects as they are called in Scala):

  object o1 {
    implicit val intOrd = Ord[Int]("o1.Int")

    implicit def listOrd[T](implicit ord: Ord[T]) = Ord[List[T]]("o1.List[" + ord.v + "]")
  }

  object o2 {
    implicit val intOrd = Ord[Int]("o2.Int")

    implicit def listOrd[T](implicit ord: Ord[T]) = Ord[List[T]]("o2.List[" + ord.v + "]")
  }

Considering all this it might seem a bit complicated to statically check that two implicit instances are indeed behaviorally equal and interchangeable. But the key insight here is that as long as no side effects are performed inside the implicit functions (and it's my strong recommendation not to perform any!), two separate instances created through the same implicit call chain generated by the Scala compiler will behave identically (except for instance identity checks of course). So, to statically check that two instances are equal (but not necessarily identical) all we need to do is to track the implicit call chain in the type of the implicit instance. So, let's get to it...


Phantom Types to the Rescue

Let's start by adding a phantom type, P, to the Ord type:

  case class Ord[P, T](v: String)

This type is only used for static equality checks, e.g. if a: Ord[A, B] and b: Ord[A, B] then a and b are equal and can be used interchangeably (note that with the previous definition, Ord[T], this was not the case). Note that the P type can also be written as a type member in Scala, but doing that gave me problems with the type inference so I won't explore that road in this article.

Now we can easily define a unique type for the implicit values in each module object:

  object o1 {
    implicit val intOrd = Ord[this.type, Int]("o1.Int")
  }

  object o2 {
    implicit val intOrd = Ord[this.type, Int]("o2.Int")
  }

The this.type expression used for the P type parameter evaluates to the singleton type of the module object containing the expression (i.e. o1.type and o2.type respectively in this case). This means that o1.intOrd and o2.intOrd no longer have the same static type (Ord[o1.type, Int] vs Ord[o2.type, Int]), which is exactly what we wanted. Note that the use of this.type only works as long as there is just one implicit instance per module for a given type T in Ord (in this case Int). This is usually not a problem, and otherwise this.type can be replaced with an arbitrary unique type defined in the module object.
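As a quick illustration of the now-distinct static types (a hypothetical snippet, assuming the o1 and o2 objects above):

  val ok: Ord[o1.type, Int] = o1.intOrd
  // val bad: Ord[o2.type, Int] = o1.intOrd  // does not compile: type mismatch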

Things get a bit trickier for implicit functions which have implicit parameters. Here we must construct a phantom type that contains both a unique module type and the phantom types of the implicit arguments. We can use a tuple type to accomplish this:

  object o1 {
    implicit val intOrd = Ord[this.type, Int]("o1.Int")

    implicit def ordList[P, T](implicit ord: Ord[P, T]) = Ord[(this.type, P), List[T]]("o1.List[" + ord.v + "]")
  }

  object o2 {
    implicit val intOrd = Ord[this.type, Int]("o2.Int")

    implicit def ordList[P, T](implicit ord: Ord[P, T]) = Ord[(this.type, P), List[T]]("o2.List[" + ord.v + "]")
  }

In this example the instance o1.ordList(o1.intOrd) would have type Ord[(o1.type, o1.type), List[Int]], o1.ordList(o1.ordList(o1.intOrd)) would have type Ord[(o1.type, (o1.type, o1.type)), List[List[Int]]] and so on. A combination of implicits from both modules o1.ordList(o2.intOrd) would have type Ord[(o1.type, o2.type), List[Int]].
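Spelled out as hand-written assignments (assuming the definitions above):

  val pure: Ord[(o1.type, o1.type), List[Int]] = o1.ordList(o1.intOrd)
  val mixed: Ord[(o1.type, o2.type), List[Int]] = o1.ordList(o2.intOrd)
  // pure and mixed have different phantom types, so instances built through
  // different call chains can no longer be confused with each other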

Set Union Implementation

So, now that we have our phantom types we can quite easily write the type safe set type and the union function. However, there are two possible implementations, either we can get the Ord instance from an implicit parameter to each set function (like union):

  object set1 {
    class Set[P, T]

    def union[P, T](s1: Set[P, T], s2: Set[P, T])(implicit ord: Ord[P, T]) = new Set[P, T]

    // Dummy constructor
    def set[P, T](v: T)(implicit ord: Ord[P, T]) = new Set[P, T]
  }

or we can store it inside the Set object:

  object set2 {
    class Set[P, T](val ord: Ord[P, T])

    def union[P, T](s1: Set[P, T], s2: Set[P, T]) = new Set[P, T](s1.ord)

    // Dummy constructor
    def set[P, T](v: T)(implicit ord: Ord[P, T]) = new Set[P, T](ord)
  }

Either way, we can't call union on sets with different P types. The first implementation saves some memory and works quite well, as the Scala compiler is pretty smart about where to look for implicit arguments based on the function's type arguments.

A Small Test

Here's a small test case to verify that we get a type error when trying to union two sets with different Ord instances:

  import set1._

  object so1 {
    import o1._
    def listListIntSet() = set(List(List(1)))
  }

  object so2 {
    import o2._
    def listListIntSet() = set(List(List(1)))
  }

  val a1 = so1.listListIntSet()
  val b1 = so1.listListIntSet()
  val a2 = so2.listListIntSet()
  val b2 = so2.listListIntSet()
  union(a1, b1)
  union(a2, b2)
  // Compiler error: union(a1, a2)

Final Words

In this article I've shown that by using Scala's powerful type and module system it's possible to get the same guarantees for instance equality in Scala as you get in Haskell, but with the extra flexibility of being able to create multiple type class instances. IMHO implicit parameters are a strictly more powerful solution to ad hoc polymorphism than type classes, and they can also be used for more purposes. But feel free to prove me wrong. :-)

Interestingly enough, by using a similar technique of adding a phantom type parameter to, for example, the Ord a type class in Haskell, it should be possible to create multiple Ord instances for the same type a and still get the same guarantees for the set union function. Maybe this idea has already been explored?

by Jesper Nordenberg (noreply@blogger.com) at September 30, 2014 04:18 PM

September 23, 2014

Functional Jobs

Scala Engineer at Localytics (Full-time)

Localytics is hiring Back End engineers to help us expand our app analytics platform. Help us empower developers to create the best possible app experiences for their users.

About you

You should ...

  • Enjoy solving difficult challenges in distributed computing.
  • Have experience with or an interest in learning functional programming languages and techniques.
  • Have a solid understanding of data structures and algorithms.
  • Have experience with or an interest in learning: Amazon Web Services, Scala, deployment automation and microservice architectures.
  • Be interested in working with advanced storage technologies such as columnar, NoSQL and MPP databases.
  • Have a relentless desire to learn.
  • Want responsibility for taking systems from the whiteboard all the way into production.

About us

We ...

  • Are tackling some of the hardest problems in big data and mobile.
  • Use the best tools for the job and are not tied to legacy frameworks or outdated technologies.
  • Relentlessly automate everything.
  • Understand that taking proper time to design, code and test ultimately results in higher velocity.
  • Invest in personal and professional development with training, mentorship and book clubs.
  • Take our work seriously, but don't take ourselves seriously.

Engineers of all levels are encouraged to apply. Come join one of the fastest growing companies in Boston.

Get information on how to apply for this position.

September 23, 2014 07:03 PM

September 22, 2014

Functional Jobs

"Big Data" Software Engineer at Teralytics Pte Ltd (Full-time)

Teralytics is a fun, growing company with lofty ambitions. At Teralytics, we work with some of the world's most interesting data sets every day. Our work is meaningful and impactful, and the products we build have the ability to transform the way governments work and to disrupt industries. We are a team of data scientists, software engineers and business folks from 12 different countries, and we have offices in Zurich and Singapore. As a young company, we offer our employees unique learning opportunities and the chance to take on important responsibilities quickly.

Ideal candidates for the Software Engineer position have extensive experience with designing and building large-scale data processing systems and analytics products. You should enjoy working with a driven team of technical and business people and be able to thrive and succeed in a dynamic, fast-paced and international environment. If you have a GitHub, StackOverflow, or similar profile, we’d love to see it!

Responsibilities

  • Design and build high-performance “Big Data” processing systems

  • Implement statistical and machine learning algorithms efficiently and accurately

  • Work with a multi-disciplinary team to deliver analytics products

Requirements

  • Expert-level proficiency in multiple programming languages, especially functional programming languages

  • Extensive experience with modern software development practices such as test-driven development, continuous integration, etc

  • Built and operated large-scale, high-availability backend systems

  • Solid foundation in Computer Science, such as algorithms and data structures

  • Proficient in Mathematics and English

Strongly Desirable

  • Years of experience in senior, lead, or architect roles

  • Experience with “Big Data” platforms such as Hadoop, Storm, or Spark

  • Good working knowledge of PostgreSQL and PostGIS

  • Proficient in Scala and Python

  • Deep understanding of distributed systems and databases

  • Working knowledge of statistics and machine learning

  • Domain expertise in telco systems or GIS

Get information on how to apply for this position.

September 22, 2014 07:02 AM

September 07, 2014

Adrian King

Code Dependent More

The more I learn about the dependently typed lambda calculus, the cooler it seems. You can use it to describe refinement types (aka subset types), which are about as expressive as a type system can get; you can prove theorems (see chapter 8 of the Idris tutorial); you can even, if the Homotopy Type Theory people have their way, use it to build an entirely new foundation for mathematics.

However, I think dependent types need a bit of a marketing makeover. They have a reputation for being abstruse, to say the least. I mean, have you even read the dependent types chapters in Benjamin Pierce's Types and Programming Languages, long the bible of the aspiring type theorist? No, of course you haven't, because those chapters weren't written (although dependent types do merit a whole section in Chapter 30). The dependently typed calculus is apparently so arcane that even the place you go to find out everything about types won't say much about them.

(Not that Pierce is in any way a slacker; his more recent Software Foundations is all about the use of dependent types in the automated proof assistant Coq.)

Although following all the mathematics to which dependent types lead takes considerable sophistication (a lot more than I've got), and the implementation choices facing a programming language designer who wants to use dependent types are considerable (let's sweep those under a Very Large Rug), a description of what the dependently typed lambda calculus is is not particularly difficult to grasp for anyone who has used a statically typed programming language.

But then, of course, there's the word “dependent”, which pop psychology has not endowed with the most favorable of connotations. (I'd rather be addicted to coding than addicted to codeine, but still.) Surely more programmers would satisfy their curiosity about more expressive type systems if the dependently typed lambda calculus were just called something else (and while we're at it, preferably something shorter).

So What Is It?

If you haven't seen the dependently typed lambda calculus before, then you might want to start at the beginning, with the untyped lambda calculus (which, if you've used any kind of conventional programming language, you already sort of know, even if you don't know you know it). In the concise notation favored by programming language geeks, the grammar for the untyped calculus looks like:

     e  :=  v                  a variable
         |  λv.e               a function with parameter v and body e
         |  e1 e2              the function e1 applied to the argument e2
         |  built-in value     whatever you want: true, 42, etc.

This may look too succinct to make anything out of, but it just means that the untyped lambda calculus is a language that contains variables, functions, function calls, and built-in values. Actually, you can leave out the built-in values (I put them in blue so you can distinguish the pricey add-ons from the base model)—you'll still have a Turing-complete language—but the built-in values (along with parentheses, which you can use to group things) make it feel more like a Lisp dialect, maybe one where curried functions are the norm.
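For example, (λx.x) 42 applies the identity function λx.x to the built-in value 42, and evaluates to 42.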

If you're a fan of static types you'd probably prefer a variant of the lambda calculus that has those, and the simplest way to do that is named (surprise!) the simply typed lambda calculus. It has a more complicated grammar than the untyped calculus, because it divides expressions into two categories: terms, which evaluate to a value at runtime, and for which I'll use the letter e; and types, which constrain terms in the usual statically typed way, and which get the letter t:

     e  :=  v                  a variable
         |  λv: t.e            a function with parameter v of type t and body e
         |  e1 e2              the function e1 applied to the argument e2
         |  built-in value     whatever you want: true, 42, etc.

     t  :=  t1 → t2            a function type, where t1 is the input type and t2 is the output
         |  built-in type      whatever you want: Bool, Int, etc.

You'll notice (because it's bright red) that I added a type annotation (: t) as the only change to the term syntax from the untyped lambda calculus. Types can be either function types or built-in types.

This isn't a very sophisticated type system, and you'd hate trying to write any real code in it. Without more add-ons, we can't get correct types out of it for things like product types (think tuples) or sum types (disjoint unions).

Historically, and in most textbooks, the simply typed lambda calculus is augmented with a bunch of special-purpose gadgetry (like those missing product and sum types), and then followed by several progressively more powerful type systems, like the parametric polymorphism that is at the core of languages like Haskell and Scala.

Boooooring!

Instead, let's just skip to the good bits. The dependently typed lambda calculus is not only more powerful than everything we're skipping over; it's also got a shorter grammar:

     e  :=  v                  a variable
         |  λv: e1.e2          a function with parameter v of type e1 and body e2
         |  e1 e2              the function e1 applied to the argument e2
         |  e1 → e2            a nondependent function type, where e1 is the input type and e2 is the output
         |  (v: e1) → e2       a dependent function type, where v, the input, has type e1, and e2 is the output type (which can mention v)
         |  *                  the type of types
         |  built-in values and types    whatever you want: true, 42, Bool, Int, etc.

But whoa, what happened to the types? Unlike the simply typed calculus, there's no t in this grammar! Well, the big thing about the dependently typed lambda calculus (aside from the dependent function types) is that types are themselves values (the OO folks would say they are reified), and type expressions may involve arbitrary computation. In short, types are just terms!

Everyone's Favorite Example

So what can you actually do with the dependently typed lambda calculus? Well, the traditional (practically mandatory) example is one of a list (let's follow convention and call it a Vec) that has its length encoded in its type.

(This is not really the most exciting thing you can do with dependent types. It's actually possible to encode natural numbers in the type system of Scala or Haskell, so you can already implement Vec in either of those languages. But the two languages' type systems are oriented towards doing typical type-level stuff, so numbers-as-types are pretty awkward—it turns out that regular numbers are still the most convenient way of representing numbers.)
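(For the curious, here is a minimal sketch of that Scala encoding; all the names here (Nat, Zero, Succ, Vec, VNil, Cons) are illustrative, not taken from any library:

  // Type-level Peano naturals
  sealed trait Nat
  sealed trait Zero extends Nat
  sealed trait Succ[N <: Nat] extends Nat

  // A list whose length is part of its static type
  sealed trait Vec[+A, N <: Nat] {
    def ::[B >: A](h: B): Vec[B, Succ[N]] = Cons(h, this)
  }
  case object VNil extends Vec[Nothing, Zero]
  final case class Cons[+A, N <: Nat](head: A, tail: Vec[A, N]) extends Vec[A, Succ[N]]

  // Callable only on vectors whose type says they are nonempty
  def head[A, N <: Nat](v: Vec[A, Succ[N]]): A = v match { case Cons(h, _) => h }

  val v = 1 :: 2 :: VNil  // Vec[Int, Succ[Succ[Zero]]]
  head(v)                 // compiles
  // head(VNil)           // does not compile: Zero is not a Succ[N]

It works, but as you can see, the encoded numbers are far clunkier than the calculus notation below.)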

You can specify the type of Vec in the dependently typed lambda calculus as:

    Vec: * → Nat → *

where Nat is the type of natural numbers. That is, a Vec is parameterized by the type of its elements (remember that * is the type of types) and the number of its elements—Vec is actually a function that, given those arguments, returns a type.

The constructor for the empty Vec, Nil, has type:

    Nil: (A: *) → Vec A 0

That is, given a type A, Nil gives you back a Vec of that type with zero elements. You'll remember that we said Vec itself is a function that takes two arguments and returns a type—so the type of Nil is indeed a type, even though it looks like a (plain old, value-level) function call. (If this is the first time you're seeing this, I hope that's as much of a rush for you as it was for me.)

It's clearly not fair for me to leave with you with just Vecs of zero elements, so here's the type of Cons, which constructs a nonempty Vec:

    Cons: (A: *) → (n: Nat) → A → (Vec A n) → (Vec A (n + 1))

That is, you feed Cons the element type A and the size n of the old list you are prepending to, along with an A and the old list, and you get back a new list with size n + 1.
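For example (following the signatures above), prepending true to an empty boolean list

    Cons Bool 0 true (Nil Bool): Vec Bool 1

yields a one-element Vec: the n + 1 in Cons's result type reduces to 0 + 1 = 1.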

Back to the Whinging

All this technical exposition is fine (although I've left out a whole lot of stuff, like how you actually execute programs written in these calculi), but I really came here to complain, not to explain.

When talking about the polysyllabically-named dependently typed lambda calculus, I have to keep using the terms “dependent function type” and “nondependent function type” (hardly anyone says “independent”), which further pushes up the syllable count. By the time I name what I'm talking about, I've forgotten what I was going to say.

I like to think of nondependent function types (say, A → B) as cake types, because the arrow (like frosting) separates two layers (A and B) with distinct, relatively homogeneous structure, like cake layers. By extension, dependent function types (say, (x: A) → b x) are pie types: the variable x gets mixed from the left side of the arrow into the right, making the whole type chunky with xs, kind of like the bits of apple in an apple pie. So how about calling the whole calculus the pie calculus?

(Now, it may have occurred to you that you don't really need to have two different function types. Wouldn't a cake type be a degenerate version of a pie type, one that didn't happen to mention its variable on the right side of the arrow? Well, yes, you're right—the cake is a lie, albeit a little white lie. But the upshot is that the pie calculus is even simpler than I've made it out to be, because you can drop the production for the cake type from the grammar.)

As it happens, there are a couple of alternate notations for dependent function types. Instead of:

    (x: A) → B

you can say:

    ∀x: A. B

or:

    Πx: A. B

Hey, that last convention makes it look as if the pie calculus is actually the Π calculus! But no, that would make life too easy. It turns out that the pi calculus is already something else, namely a formalism for describing concurrent computations. It's related to the lambda calculus, so no end of confusion would ensue if you tried to hijack the name.

So I guess I'll have to keep calling my favorite calculus by its mind-numbingly long name. Life is unfair, etc., etc.

And So Can You!

You can mitigate some of life's unfairness by mastering the dependently typed lambda calculus yourself. Check out Coq (the aforementioned Coq-based Software Foundations is probably the easiest way to get into dependent types), or Idris, or Agda, or even F* (or, for a quite different take on what it means to implement the calculus, Sage), and start amazing your friends with your new superpowers!

by Archontophoenix (noreply@blogger.com) at September 07, 2014 03:58 AM

August 28, 2014

Functional Jobs

Senior Software Engineer (Functional) at McGraw-Hill Education (Full-time)

This Senior Software Engineer position is with the new LearnSmart team at McGraw-Hill Education's new and growing Research & Development center in Boston's Innovation District. We make software that helps college students study smarter, earn better grades, and retain more knowledge.

The LearnSmart adaptive engine powers the products in our LearnSmart Advantage suite — LearnSmart, SmartBook, LearnSmart Achieve, LearnSmart Prep, and LearnSmart Labs. These products provide a personalized learning path that continuously adapts course content based on a student’s current knowledge and confidence level.

On our team, you'll get to:

  • Move textbooks and learning into the digital era
  • Create software used by millions of students
  • Advance the state of the art in adaptive learning technology
  • Make a real difference in education

Our team's products are built with Flow, a functional language in the ML family. Flow lets us write code once and deliver it to students on multiple platforms and device types. Other languages in our development ecosystem include especially JavaScript, but also C++, SWF (Flash), and Haxe.

If you're interested in functional languages like Scala, Swift, Erlang, Clojure, F#, Lisp, Haskell, and OCaml, then you'll enjoy learning Flow. We don't require that you have previous experience with functional programming, only enthusiasm for learning it. But if you do have some experience with functional languages, so much the better! (On-the-job experience is best, but coursework, personal projects, and open-source contributions count too.)

We require only that you:

  • Have a solid grasp of CS fundamentals (languages, algorithms, and data structures)
  • Be comfortable moving between multiple programming languages
  • Be comfortable with modern software practices: version control (Git), test-driven development, continuous integration, Agile

Get information on how to apply for this position.

August 28, 2014 09:18 PM

August 25, 2014

Francois Armand

Upgrading to Ubuntu 14.04: "Error: Timeout was reached", "Segmentation fault (core dumped)" with APT and other stranges errors


This last week-end, I tried to upgrade my old Bodhi Linux 2.4, based on Ubuntu 12.04 LTS, to Bodhi 3.0, based on Ubuntu 14.04 LTS.


At some point something went wrong with the update of the 1500 or so packages. I'm not sure exactly what happened, but I started to see strange errors, especially when trying to install/remove/fix packages, but also elsewhere, for example when running some Python3 scripts.

In the end, everything looked like it was working, but only almost. Google failed me on that one, and I spent quite some time this week-end trying to get to the bottom of the problem, so I'm leaving a testimonial here in case it can help other people.

Symptoms

Segmentation fault (core dumped) with apt-get and python3 related scripts


This appeared especially when trying to install (or install -f) the "update-notifier-common" package, for example (sorry for the French):

% sudo apt-get install -f
Lecture des listes de paquets... Fait
Construction de l'arbre des dépendances      
Lecture des informations d'état... Fait
0 mis à jour, 0 nouvellement installés, 0 à enlever et 0 non mis à jour.
1 partiellement installés ou enlevés.
Après cette opération, 0 o d'espace disque supplémentaires seront utilisés.
Paramétrage de update-notifier-common (0.154.1) ...
Segmentation fault (core dumped)
dpkg: error processing package update-notifier-common (--configure):
 le sous-processus script post-installation installé a retourné une erreur de sortie d'état 139
Des erreurs ont été rencontrées pendant l'exécution :
 update-notifier-common
Error: Timeout was reached
E: Sub-process /usr/bin/dpkg returned an error code (1)



The other aspect of the problem was that any Python3-related error led to a stack trace like this:

% python3
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> foo                                                                                                                                                                                                     
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
    from apport.report import Report
  File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in <module>
    import apport.fileutils
  File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in <module>
    from apport.packaging_impl import impl as packaging
  File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 20, in <module>
    import apt
  File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in <module>
    import apt_pkg
ImportError: /usr/lib/python3/dist-packages/apt_pkg.cpython-34m-x86_64-linux-gnu.so: undefined symbol: _ZN13pkgTagSectionC1Ev

Original exception was:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined



Error: Timeout was reached

The second main symptom was that when using apt-* tools, I was getting an error message "Error: Timeout was reached" just at the end of the process. Nonetheless, everything seemed to be correctly installed/removed/etc.


Bug tracking, short version


That led me to a bug report about a timeout too short for PackageKit:

/usr/bin/test -e /usr/share/dbus-1/system-services/org.freedesktop.PackageKit.service && /usr/bin/test -S /var/run/dbus/system_bus_socket && /usr/bin/gdbus call --system --dest org.freedesktop.PackageKit --object-path /org/freedesktop/PackageKit --timeout 4 --method org.freedesktop.PackageKit.StateHasChanged cache-update

So I tested with a really big number, and got:

/usr/bin/gdbus call --system --dest org.freedesktop.PackageKit --object-path /org/freedesktop/PackageKit --timeout 300 --method org.freedesktop.PackageKit.StateHasChanged cache-update

Error: GDBus.Error:org.freedesktop.DBus.Error.TimedOut: Activation of org.freedesktop.PackageKit timed out


Which led to testing PackageKit itself:

% sudo /usr/lib/packagekit/packagekitd --verbose
11:12:57        PackageKit          Verbose debugging enabled (on console 1)
11:12:57        PackageKit          keep_environment: 0
11:12:57        PackageKit          using config file '/etc/PackageKit/PackageKit.conf'
11:12:57        PackageKit          syslog fucntionality disabled
11:12:57        PackageKit          Log all transactions: 1
11:12:57        PackageKit          daemon shutdown set to 300 seconds
11:12:57        PackageKit          clearing download cache at /var/cache/PackageKit/downloads
11:12:57        PackageKit          destination eth0 is valid
11:12:57        PackageKit          setting config file watch on /etc/PackageKit/PackageKit.conf
11:12:57        PackageKit          ProxyHTTP read error: Key file does not have key 'ProxyHTTP'
11:12:57        PackageKit          searching for plugins in /usr/lib/x86_64-linux-gnu/packagekit-plugins
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-systemd-updates.so: A plugin to write the prepared-updates file
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-clear-firmware-requests.so: Clears firmware requests
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-check-shared-libraries-in-use.so: checks for any shared libraries in use after a security update
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-scan-desktop-files.so: Scans desktop files on refresh and adds them to a database
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-update-check-processes.so: Checks for running processes during update for session restarts
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-no-update-process.so: Updates the package lists after refresh
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-update-package-cache.so: Maintains a database of all packages for fast read-only access to package information
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin_scripts.so: Runs external scrips
11:12:57        PackageKit          opened plugin /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-require-restart.so: A dummy plugin that doesn't do anything
11:12:57        PackageKit          run pk_plugin_initialize on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-check-shared-libraries-in-use.so
11:12:57        PackageKit          finished pk_plugin_initialize
11:12:57        PackageKit          run pk_plugin_initialize on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-scan-desktop-files.so
11:12:57        PackageKit          finished pk_plugin_initialize
11:12:57        PackageKit          run pk_plugin_initialize on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-update-check-processes.so
11:12:57        PackageKit          finished pk_plugin_initialize
11:12:57        PackageKit          run pk_plugin_initialize on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-no-update-process.so
11:12:57        PackageKit          finished pk_plugin_initialize
11:12:57        PackageKit          run pk_plugin_initialize on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-update-package-cache.so
11:12:57        PackageKit          finished pk_plugin_initialize
11:12:57        PackageKit          run pk_plugin_initialize on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-require-restart.so
11:12:57        PackageKit          finished pk_plugin_initialize
11:12:57        PackageKit          Trying to load : aptcc
11:12:57        PackageKit          dlopening '/usr/lib/x86_64-linux-gnu/packagekit-backend/libpk_backend_aptcc.so'
Failed to load the backend: opening module aptcc failed : /usr/lib/x86_64-linux-gnu/packagekit-backend/libpk_backend_aptcc.so: undefined symbol: _ZN13pkgTagSectionC1Ev11:12:57 PackageKit          run pk_plugin_destroy on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-check-shared-libraries-in-use.so

11:12:57        PackageKit          finished pk_plugin_destroy
11:12:57        PackageKit          run pk_plugin_destroy on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-scan-desktop-files.so
11:12:57        PackageKit          finished pk_plugin_destroy
11:12:57        PackageKit          run pk_plugin_destroy on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-update-check-processes.so
11:12:57        PackageKit          finished pk_plugin_destroy
11:12:57        PackageKit          run pk_plugin_destroy on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-no-update-process.so
11:12:57        PackageKit          finished pk_plugin_destroy
11:12:57        PackageKit          run pk_plugin_destroy on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-update-package-cache.so
11:12:57        PackageKit          finished pk_plugin_destroy
11:12:57        PackageKit          run pk_plugin_destroy on /usr/lib/x86_64-linux-gnu/packagekit-plugins/libpk_plugin-require-restart.so
11:12:57        PackageKit          finished pk_plugin_destroy
11:12:57        PackageKit          already closed (nonfatal)
11:12:57        PackageKit          parent_class->finalize


HooooOOoo... What about Aptitude?


% aptitude
aptitude: symbol lookup error: aptitude: undefined symbol: _ZN3APT11CacheFilter39PackageArchitectureMatchesSpecificationC1ERKSsb



Solution

The problem was that somehow I had gotten a bad version of libapt-pkg4.12, so anything related to apt in some way had a chance to end badly.

Reinstalling the correct versions (the ones provided with my distribution) fixed all the problems:

sudo apt-get install libapt-pkg4.12=1.1.1bodhi1 apt=1.1.1bodhi1 apt-transport-https=1.1.1bodhi1 apt-utils=1.1.1bodhi1


Hope it helps!


by Fanf (noreply@blogger.com) at August 25, 2014 03:19 PM

August 11, 2014

Functional Jobs

Big Data Engineer / Data Scientist at Recruit IT (Full-time)

  • Are you a Big Data Engineer who wants to work on innovative cloud and real-time data analytic technologies?
  • Do you have a passion for turning data into meaningful information?
  • Does working on a world-class big data project excite you?

Our client is currently in a growth phase and looking for passionate and creative Data Scientists who can design, develop, and implement robust and scalable big data solutions. This is a role where you will need to enjoy being on the cusp of emerging technologies, and have a genuine interest in breaking new ground.

Your skills and experience will cover the majority of the following:

  • Experience working across real-time data analytics, machine learning, and big data solutions
  • Experience working with large data sets and cloud clusters
  • Experience with various NoSQL technologies and Big Data platforms including; Hadoop, Cassandra, HBASE, Accumulo, and MapReduce
  • Experience with various functional programming languages including; Scala, R, Clojure, Erlang, F#, Caml, Haskell, Common Lisp, or Scheme

This is an excellent opportunity for someone who is interested in a change in lifestyle, and where you would be joining other similar experienced professionals!

New Zealand awaits!

Get information on how to apply for this position.

August 11, 2014 12:32 AM

July 28, 2014

Gregg Carrier

Mobile Enterprise Integration with Scala, MongoDB and Swagger

Check out my post on Enterprise Integration and some of the tools we have built for it using MongoDB, Scala and Swagger:

Enterprise Integration for Mobile Applications

by Gregg Carrier (noreply@blogger.com) at July 28, 2014 04:52 PM

July 27, 2014

scala-lang.org

Scala: Next Steps

As with every living programming language, Scala will continue to evolve. This document describes where the core Scala team sees the language going in the medium term and where we plan to invest our efforts.

In a nutshell, our main goals are to make the language and its libraries simpler to understand, more robust, and better performing. The features described in this document span the next three major releases of the Scala distribution. Naturally, the planning for later releases is more tentative and fluid than for earlier ones.

Scala 2.12

Scala 2.12’s main theme is Java 8 interoperability. It will support Java 8 lambdas and streams and will allow easy cross calls with these features in both directions. We recently published a detailed feature list and roadmap for this release.

We have not yet decided on version numbers for the releases beyond 2.12, so for the time being we will use opera names as designators.

Scala “Aida”

This release focuses on improving the standard library.

  1. Cleanups and simplification of the collections library: we plan to reduce the size of the collections library, providing some functionality as separate modules. Generally, we want to make them even easier to use and structure them so that they are more amenable to optimizations. Where needed, breaking changes will be announced using deprecation in Scala 2.12; regular use of the collections will likely be unaffected, but custom collections may need to be adapted to the simplified hierarchy.

    1. Reduce reliance on inheritance
    2. Make all default collections immutable (e.g. scala.Seq will be an alias of scala.immutable.Seq; a short sketch of this change follows right after this list)
    3. Other small cleanups that are possible with a rewriting step (e.g. rename mapValues)
  2. Added functionality: We’d like to introduce several new modules, including a couple of spin-offs from the collections library.

    1. Lazy collections through improved views, including Java 8 streams interop.
    2. Parallel collections with performance improvements obtained from operation fusion and more efficient parallel scheduling.
    3. An integrated abstraction to handle validation.
  3. The (independent) scala.meta project aims to establish a new standard for reflection and macro programming. It will be considered for integration in the standard library once it is mature and stable.

  4. As in every Scala release, we’ll also work on improving compiler performance. Since this release focuses on the library, compiler changes will be strictly internal.
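
To illustrate the planned switch of the default collections to immutable (point 2 under item 1 above), here is a small sketch. The Scala 2.11 behavior shown is current; the post-change behavior is an assumption based on the alias described above.

object SeqDefaultSketch {
  import scala.collection.mutable.ArrayBuffer

  def describe(xs: Seq[Int]): String = xs.mkString(",")

  // Today (Scala 2.11): this compiles, because scala.Seq is
  // scala.collection.Seq and ArrayBuffer is a collection.Seq.
  describe(ArrayBuffer(1, 2, 3))

  // After the planned change, scala.Seq would alias scala.immutable.Seq,
  // so the call above would need an explicit conversion, e.g.:
  // describe(ArrayBuffer(1, 2, 3).toList)
}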

Backwards compatibility and migration strategy: The changes to collections might require source code to be rewritten, even though this should be rare. However, we aim to maintain source code compatibility modulo an automatic migration tool (analogous to go fix for Go) that can do the rewriting automatically. Ideally, that tool should be robust and expressive enough to support cross-building.

Prototypes of the new collection functionality and meta-programming libraries will be made available as separate libraries in the Scala 2.12 timeframe, so that projects can experiment with the new features early.

Scala “Don Giovanni”

The main focus for this release is the Scala programming language and its compiler. The new version should provide clear improvements in simplicity, usability and stability, while at the same time staying backwards compatible with current usage of the language.

Areas that will be investigated include the following:

  1. Cleaned-up syntax: The objective is to more clearly expose Scala’s principle of having few orthogonally composable features. (A brief sketch of these cleanups follows right after this list.)

    1. Trait parameters instead of early definition syntax
    2. XML string interpolation instead of XML literals
    3. Procedure syntax is dropped in favor of always defining functions with =
    4. Simplified and unified type syntax for all forms of information elision: existential types and partial type applications are both expressed with _, forSome syntax is eliminated.
  2. Removing puzzlers: There are some features in Scala which are known to be prone to puzzlers, and which can be made safer by tweaking the language. In particular, the following changes would help:

    1. Result types are mandatory for implicit definitions.
    2. Inherited explicit result types take precedence over locally-inferred ones.
    3. Universal toString conversion and concatenation via + should require explicit enabling.
    4. Avoid surprising behavior of auto-tupling.
  3. Simple foundations: This continues the drive for simplicity on the type-system side. We will identify formerly disparate features as specific instances of a small set of concepts. This will help in understanding the individual features and how they hang together. It will also reduce unwanted feature interactions. In particular:

    1. A single fundamental concept - type members - can give a precise meaning to generics, existential types, wildcards, and higher-kinded types.
    2. Intersection and union types make member selection more regular and avoid blow-ups when computing tight upper and lower bounds of sets of types.
    3. Tuples can be decomposed recursively, overcoming current limits to tuple size, and leading to simpler, streamlined native support for abstractions like HLists or HMaps which are currently implemented in some form or other in various libraries.
    4. The type system will have theoretical foundations that are given by a minimal core calculus (DOT).
  4. Better tooling: We will continue to focus on the tooling side, with the goals of improving batch compiler speed and making the compiler more amenable to fast incremental compilation and IDE presentation support.

  5. Faster code: We plan to improve performance of generated code using optimizations including:

    1. Robust specialization using Miniboxing techniques, applied to collections (a preview of this may already be available in Aida).
    2. Improvements to value classes: Can be array elements, can play part in specializations, can be multi-field.
    3. Optimized implementation of thread-local lazy vals.

We will collaborate here with the Java effort in project Valhalla, which has similar goals.
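
As a rough illustration of the syntax cleanups in item 1 and one of the puzzlers in item 2, here is a sketch contrasting current Scala 2.11 code with the tentative future forms; the future syntax appears only in comments, since none of it is final.

object DonGiovanniSketch {
  // Today: traits cannot take parameters, so early definitions are
  // needed to initialize `name` before the trait body can use it.
  trait Greeting { val name: String; def msg = "Hello, " + name }
  class C extends { val name = "world" } with Greeting

  // Planned: trait parameters replace early definitions (tentative):
  //   trait Greeting(name: String) { def msg = "Hello, " + name }
  //   class C extends Greeting("world")

  // Today: procedure syntax silently returns Unit.
  def log(s: String) { println(s) }
  // Planned: methods are always defined with `=`.
  def log2(s: String): Unit = println(s)

  // One of the puzzlers targeted by item 2: auto-tupling silently wraps
  // the two arguments in a Tuple2, so this prints "(1,2)".
  println(1, 2)
}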

Backwards compatibility

Since some features are superseded by others, some source code will have to be rewritten. However, using the migration tool described earlier, common Scala code should port automatically. In particular, we aim to ensure that all features described in the latest edition of “Programming in Scala” can be ported automatically. However, the porting guarantee will not extend to features that are labelled “experimental”. For some of these (e.g. macros and reflection), we aim to have a replacement that can fulfill analogous functionality, but using different notation and APIs.

Resourcing

Having a feature on this list does not mean that we have already committed the resources to work on it. The roadmap is intended as a framework for the development of future Scala versions. We are happy to take contributions that implement parts of it that are lower on our priority list. A feature not listed here must first be accepted for inclusion in the roadmap before work on it starts.

July 27, 2014 10:00 PM

July 23, 2014

scala-lang.org

Scala 2.11.2 is now available!

We are very pleased to announce the release of Scala 2.11.2!

Scala 2.11.2 is a bugfix release that is binary compatible with previous releases in the Scala 2.11 series. The changes include:

  • Several issues in the collections library were resolved, most notably equality on ranges (SI-8738).
  • The optimizer no longer eliminates division instructions that may throw an ArithmeticException (SI-7607).
  • The -Xlint compiler flag is now parameterized by individual warnings. This is intended to replace the -Ywarn-... options, for instance, -Xlint:nullary-unit is equivalent to -Ywarn-nullary-unit. Run scalac -Xlint:help to see all available options. Kudos to @som-snytt!
  • TypeTags and Exprs are now serializable (SI-5919).

Compared to 2.11.1, this release resolves 49 issues. We reviewed and merged 70 pull requests.

The next minor Scala 2.11 release will be available in 2 months, or sooner if prompted by a serious issue.

Available Libraries and Frameworks

A large number of Scala projects have been released against Scala 2.11. Please refer to the list of libraries and frameworks available for Scala 2.11.

A release of the Scala IDE that includes Scala 2.11.2 will be available shortly on their download site.

Release Notes for the Scala 2.11 Series

The release notes for the Scala 2.11 series, which also apply to the current minor release, are available in the release notes for Scala 2.11.1. They contain important information such as:

  • The specification of binary compatibility between minor releases.
  • Details on new features, important changes and deprecations in Scala 2.11.

Contributors

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and submitting and reviewing pull requests! You are all awesome.

According to git shortlog -sn --no-merges v2.11.1..v2.11.2, 21 people contributed code to this minor release: Jason Zaugg, A. P. Marki, Lukas Rytz, Adriaan Moors, Rex Kerr, Eugene Burmako, Antoine Gourlay, Tobias Roeser, Denys Shabalin, Philipp Haller, Chris Hodapp, Todd Vierling, Vladimir Nikolaev, François Garillot, Jean-Remi Desjardins, Johannes Rudolph, Marcin Kubala, Martin Odersky, Paolo Giarrusso, Rui Gonçalves, Stephen Compall.

July 23, 2014 10:00 PM

July 21, 2014

Tomás Lázaro

Scala for Java Developers

A few weeks ago I was asked to review a book called "Scala for Java Developers". Since I was preparing some Scala lessons to share at work, it made a lot of sense to agree. Finding a good way to introduce Java and .NET developers to Scala is hard. If someone is interested in learning a new language it's easy: just give them the Programming in Scala book. The challenge I'm up against is making a case for actually spending time learning it at all. What caught my eye a few years ago was all the expressiveness and succinctness I could achieve by making the leap. My expectation was to find a book that explained how to translate all that Java knowledge into Scala while learning the interesting new features one discovers making such a transition. The book was not exactly what I expected, but that is not a bad thing.

The first chapter introduces the reader to the Activator and drops them into the REPL. It shows good examples of how easy it is to declare classes, play around with collections and express basic stuff. That is what I expected to find, but then it picks up speed. It continues in the next chapter by creating a Java web service and showing how to migrate it to Scala piece by piece. Then it goes on to IDEs and SBT, mentions many useful SBT plugins and worksheets, and even covers writing a REST client. That is followed by a tour of all the testing frameworks available in Scala land, a move to the Play Framework, and then some debugging.

Halfway through the book, most of what a regular developer faces on a day-to-day basis has been touched upon. Scala syntax is explained almost by accident, just as a necessity to show what can be done. It doesn't teach the language but rather provides examples of interesting and most likely new and simpler ways to do the things a Java developer is used to. The rest of the book continues at full speed with databases, web services, XML, JSON, CRUDs and concurrency. I was surprised it even tackles Iteratees to illustrate reactive web applications like chat rooms. It even mentions Scala.js!

Overall it shows many libraries and tools used in the Scala ecosystem. It takes on many of the common problems most people are trying to solve at work. I think it can appeal to Spring, JavaEE or Rails users and similar. It provides a bird's eye view of the frameworks available while forcing the reader to actually try stuff out immediately in the REPL. Readers will get a feeling for what Scala enables and will surely go out looking for more.

My conclusion is that if you are already into Scala and learning on your own or with other books, then it is not for you. I recommend the book as a teaser for programmers who don't know Scala at all (or barely) and want to discover what it is about by skipping straight to the fun part. Also, for someone like me who is trying to get other people interested, it provides many ideas and examples that are simple to set up and play with. Having more Scala resources is good, and this book fills a spot that needed to be filled.

by Tomás Lázaro (noreply@blogger.com) at July 21, 2014 02:59 PM

July 09, 2014

Mirko Stocker

Play 2.3 Applications on OpenShift

This is a quick how-to guide to get your Play 2.3 applications up and running on RedHat’s OpenShift PaaS. OpenShift unfortunately doesn’t support Play out of the box, and there are some pitfalls that can be quite annoying.

Why OpenShift?

As I said, OpenShift – contrary to many other PaaS providers like CloudBees or Heroku – does not support Play directly, but you can use a third-party “cartridge” (that’s what they call the set of scripts and the initial template) to run Play. So why OpenShift? For a pet project that I wanted to run as cheaply as possible, I needed a database with more than the usual 5MB / 10k rows you get with most free PaaS offerings. On OpenShift, you simply get 1GB of total storage, including your database.

Getting Started

First you need to create a new application from scratch and choose a URL for your application. Fortunately, we don’t actually have to start from scratch but can use Michel Daviot’s OpenShift Play Framework Cartridge. Leave the Source Code field empty, but set the Cartridge to the following URL: http://cartreflect-claytondev.rhcloud.com/reflect?github=tyrcho/openshift-cartridge-play2.

[Screenshot: creating a new application]

Let OpenShift create the application and take a break. Seriously, it will take up to ten minutes to create the application. And even if the application overview page tells you that the application is ready, it might still need some additional time until it’s actually up and running. The primary reason is most likely that the underlying VMs just aren’t that powerful and don’t have that much RAM (512MB), and also that SBT initially needs to download Play and all its dependencies.

[Screenshot: application overview]

In the end, you should be able to access your new application:

[Screenshot: the Play welcome page]

That wasn’t too hard, but the application isn’t very useful yet. In the next step, we are going to replace the Hello World template with an actual Play application.

Replacing the Template Application

Now let’s replace the template with an actual application. We’re going to run the Activator Play Slick Demo, so we first need to clone that repository. Then we add the OpenShift repository as a new remote (you will find the URL on the application overview page).

git clone https://github.com/loicdescotte/activator-play-slick.git
git remote add openshift ssh://XXX@playslick-mstocker.rhcloud.com/~/git/playslick.git/
git push -f openshift HEAD

When you push the new code, you’ll see OpenShift compile your code:

Counting objects: 46, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (41/41), done.
Writing objects: 100% (46/46), 8.09 KiB | 0 bytes/s, done.
Total 46 (delta 15), reused 0 (delta 0)
remote: Building git ref 'master', commit 4e8c09a
remote:
remote: [info] Loading project definition from /var/lib/openshift/53bcfe2c500446884400021a/app-root/runtime/repo/project
remote: [info] Set current project to activator-play-slick (in build file:/var/lib/openshift/53bcfe2c500446884400021a/app-root/runtime/repo/)
...
remote: [success] Total time: 162 s, completed Jul 9, 2014 8:29:16 AM
remote: Preparing build for deployment
remote: Deployment id is 9a7ca0fe
remote: Activating deployment
remote:
remote:
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://53bcfe2c500446884400021a@playslick-mstocker.rhcloud.com/~/git/playslick.git/
 + a6011c5...4e8c09a HEAD -> master (forced update)

Building the application succeeded, but it won’t work:

[Screenshot: application unavailable]

Fortunately, we can log into the VM. The exact ssh command is listed on the application overview, beneath the clone URL. The most interesting file on the VM is probably the log file, which you can find at play2/logs/play.log. The play2 directory also contains the scripts and hooks. To figure out how OpenShift will start and stop your application, take a look at play2/bin/control.

[playslick-mstocker.rhcloud.com]\> ls play2/
bin  env  hooks  logs  metadata  README.md  template

[playslick-mstocker.rhcloud.com]\> ls play2/logs
play.log

[playslick-mstocker.rhcloud.com]\> ls play2/bin
control  install  setup

This gives a clue why the application doesn’t work:

[playslick-mstocker.rhcloud.com]\> less play2/logs/play.log
nohup: failed to run command `target/universal/stage/bin/play': No such file or directory

[playslick-mstocker.rhcloud.com]\> head play2/bin/control
#!/bin/bash -e

function start {
    cd $OPENSHIFT_REPO_DIR
    # build on first time
    [ -d target ] || build
    rm -f target/universal/stage/RUNNING_PID
    nohup target/universal/stage/bin/play -Duser.home=${OPENSHIFT_DATA_DIR} -Dhttp.port=8080 -Dhttp.address=${OPENSHIFT_PLAY2_IP} -DapplyEvolutions.default=true -Dconfig.resource=openshift.conf -mem 512 > $OPENSHIFT_PLAY2_LOG_DIR/play.log 2>&1 &
}

OpenShift (or rather, the cartridge we used) expects the project to be called play, but ours is activator-play-slick. We can either modify the control script or rename the project; I simply changed the project name in build.sbt (a one-line sketch follows below). We can also see that the control script uses the openshift.conf file for the configuration, which doesn’t exist yet. So let’s create the conf/openshift.conf file in the repository and have it include the main configuration, so that we can override OpenShift-specific parts later on:

include "application"

There’s one last thing we need to do: the application uses play-slick to auto-generate the evolutions, but this doesn’t seem to work when we run the staged application. To generate the evolutions, we can simply run sbt test (if you know a direct way to run the play-slick DDL plug-in, please let me know). Now commit the generated conf/evolutions folder, the openshift.conf file and the changes to the build file to the repository. Push the changes to OpenShift, grab another coffee, and now we should see the application.

[Screenshot: the running application]

Using MySQL instead of H2

As a final step, let us replace the default H2 database with MySQL. On the application overview page, add the MySQL Cartridge to your application. This will again take some time, but in the end it will give you the connection details. You don’t have to write these down, because they will be provided to your application through environment variables. Add the following configuration to your openshift.conf:

db.default.slick.driver=scala.slick.driver.MySQLDriver
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://"${OPENSHIFT_MYSQL_DB_HOST}":"${OPENSHIFT_MYSQL_DB_PORT}"/"${OPENSHIFT_APP_NAME}
db.default.user=${OPENSHIFT_MYSQL_DB_USERNAME}
db.default.password=${OPENSHIFT_MYSQL_DB_PASSWORD}

Also, don’t forget to add the MySQL driver to your build.sbt:

libraryDependencies ++= Seq(
  "org.webjars" %% "webjars-play" % "2.2.2",
  "com.typesafe.play" %% "play-slick" % "0.7.0-M1",
  "mysql" % "mysql-connector-java" % "5.1.30")

Now it gets a bit ugly: the H2 database definition is not compatible with MySQL, and I haven’t figured out how to just run the play-slick DDL generation, so we just adapt the schema by hand:

create table CAT (name TEXT NOT NULL, color TEXT NOT NULL);

Commit and push those changes, wait a moment and now your data will be stored in MySQL!

I hope this guide was useful, and I’d be happy to hear your experiences in running Play applications on OpenShift.


by Mirko Stocker at July 09, 2014 04:58 PM

July 06, 2014

Jim McBeath

From Obvious To Agile

What do you do when obvious isn't?

Installing new fence posts

Many years ago I had a fence that needed to be repaired. I got a recommendation for a fence repair man from a friend and had him come out to take a look. He said the panels between the posts were fine and did not need to be replaced, I just needed new posts. He quoted me a price for installing new fence posts that seemed quite reasonable, and I accepted his bid.

A few days later he came back to do the job. After he had been out there working for a while, I went out to take a look. I was surprised when I saw how he had installed the new fence posts. He had not removed the old posts and put new posts in their places, as I had assumed; instead, he simply planted a new post next to each old post and strapped them together. I was flabbergasted, and complained to him that my expectation was that he was going to take out the old posts and replace them with new posts. He was nonplussed. "I told you I would install new posts," he said. "Taking out the old posts would be way more work, and I would have to charge you more."

Well, he had me: he had indeed said only that he would install new posts. I was the one who assumed he would take out the old posts. I grumbled, paid him extra to replace a few of the old posts where it was particularly troublesome to have an extra post sticking out, and had the whole fence replaced the right way a few years later.

Keep using gmail

One of the startups at which I worked used gmail and was acquired by a large company that used Exchange. Concerned about the possibility of having to move to what we felt was a worse system, we asked what would happen with email. We were relieved when they said we could keep using gmail.

On the very first day that we were officially part of the new company, we were all told that we now had Exchange email accounts. "Hey!," we said, "you told us we could keep our gmail accounts." "Yes, you can," came the response, "but you also need to have an Exchange account for all official company email."

This was, of course, not what we had expected when we asked if we could keep our gmail accounts. But, as with the new fence posts, they had in fact kept their word and let us keep our gmail accounts; it was we who assumed that that would continue to be our only email account.

Everything under SCCS

At one of the places I worked, we hired a contractor to work on a subsystem. At one point we became concerned about how he was managing his source code, so we asked how he was doing that. "Everything is under sccs," he said. (This was well before the days of git, subversion, cvs, or even rcs; at the time, sccs (Source Code Control System) was what most people in our industry were using.) When he finally delivered the source code to us, we were annoyed to discover that he simply had a directory named "sccs", and all of his source code was contained in that directory; there was in fact no versioning or history.

Once again, this was not what we had expected. When he said "sccs" we assumed he was talking about the source code control system, when in fact he was just referring to a directory name; and when he said "under" we assumed he meant "managed by", when in fact he just meant "contained in."

A new and improved version of Android

My first smart phone was an Android phone running version 2.2. I watched as the newer versions of Android came out, filled with interesting new features. Finally, an over-the-air update was available for my phone. I eagerly updated and started playing with the new features. My first disappointment was with the new and definitely not improved performance: my phone was slow and laggy, and it no longer lasted even one day on a full charge.

I was even more dismayed to discover that they had removed USB Mass Storage Mode (MSC or UMS) and replaced it with a significantly less functional alternative, MTP (Media Transfer Protocol). In my case, it was completely non-functional for my use, because my home desktop machine was running Linux, and at the time there was not a working Linux driver for MTP mode.

I was, as you might expect, pretty ticked off. I had assumed without thinking about it that they would not remove a significant feature from a new version of the software, but they never said that.

Alternate Interpretations

Ask yourself: when reading the above anecdotes, did you realize in advance of the denouement what the problem would be for all of them? If it had been you, would you have made the same assumptions as I did?

Sometimes something seems so obvious to us that it does not even cross our minds that there might be an alternate interpretation.

I don't think it is possible for us to see these alternative interpretations in every case; often it is something with which we have had no experience, so could not be expected to know. We do, of course, sometimes consider alternative interpretations. In the future, if someone tells me they will install new fence posts, I will be sure to ask for more details. But we have to make assumptions as we deal with the world every day. If we examined every statement and every experience for alternative interpretations, that would consume all of our time, and we would not have any time left to pursue new thoughts. We learn to make instant and unconscious judgment calls: as long as what we hear and see has a high enough probability of an unambiguous interpretation, the possibility that there is an alternate interpretation does not bubble up to our conscious minds. Overall this is a very effective strategy that lets us focus our mental energies on situations where an unusual outcome is more likely. But this does mean that every once in a while we will miss something, with undesired results.

Going beyond obvious

I have already given my recommendation to State The Obvious. However, as you can see from the above anecdotes, this is not always enough. But what else can we do?

If you consider the anecdotes above, you might notice that, in most of them, by the time I realized that I had made an incorrect assumption, the deed was done and I was stuck with an undesired result. But the fence post story was a little different: in that case, I checked up on the work before it was done. Because I discovered the problem while it was happening, I was able to ask for changes and get a result that was closer to what I wanted.

Software Development

Not all of my blog posts are about software development, but in this case the application is obvious. Well, it seems obvious to me, but just in case it is not obvious to everyone, I will follow my own advice and explain in detail.

In the traditional waterfall process, a complete and detailed specification of the desired system is created before doing any of the implementation work. Once that spec is done, the system is built to match it. But, as we have seen from the anecdotes above, even a very simple spec, such as "install new fence posts", might be interpreted in a bizarre way that still matches the letter of the specification. In this case, the result might be something that arguably matches what was specified, but is not what was wanted.

Based on my personal experience and anecdotes I have heard from others, I believe that it is very difficult to write a good spec for something new, and impossible to write a spec that can not be interpreted by somebody in some bizarre way that satisfies the spec but is not the desired result.

Given that we can't guarantee that we can write a spec that will not be misinterpreted, what is the alternative? I think the only alternative is to do what I did in the fence-post case: check up on the work and make corrections along the way. This is embodied in a couple of the value statements in The Agile Manifesto: "Customer collaboration over contract negotiation" and "Responding to change over following a plan".

If you are asking someone to create something that is very similar to things that have been created before, and through previous common experience there is already a shared vocabulary sufficient to describe how the desired result compares to those previous creations, then you can perhaps write a spec that will get you what you want. The closer the new thing is to those previously created things, the easier that will be. But in software development, where the goal is often specifically to create something novel, this is particularly difficult. In that situation, I think that creating and then relying solely on a detailed spec is less likely to result in a satisfactory outcome; I believe an agreement on direction and major points, followed by keeping a close eye on progress, paying particular attention when something is being done for the first time, is the key to good results.

Writing a Spec

I'm not saying don't write a spec. I'm saying you need to recognize that a spec won't take you all the way, and a poorly written spec can hinder your progress. Writing a spec is like looking at a map and planning your route: often necessary but seldom sufficient. You need to be prepared for construction closures, blocking accidents, or even additional interesting sights you might decide to see along the way. For any of these diversions, you will need to reexamine your route in the middle of the trip and select an alternative. For a short trip, you might not run into any such problems and thus not need to modify your route, but the longer the journey the more likely that at some point you will need or want to deviate from your original route.

If you are familiar with the roads and have a clear destination, you might be able to dispense with the initial route planning completely: just head in the right direction and follow the signs. Or if you are on a discovery road trip and don't have a specific destination, then heading out without a planned route is fine. In most cases, though, some level of advance route planning will save time. You just need to stay agile and be prepared to change your route along the way.

by Jim McBeath (noreply@blogger.com) at July 06, 2014 08:01 PM

July 05, 2014

Daniel Sobral

File generation with SBT

Someone asked me a question on IRC about file generation with SBT. I pointed out this link on the SBT documentation, and tried to briefly explain how it worked, but the subject got a little too long for IRC, so I thought I might make a blog post out of it. Good thing too, because there are some errors in that page.

Anyway, let's start. The goal here is that, when you compile a project, some source files are going to be generated by code, and then compiled together with the other ones you wrote. The person wanted the generator to have tests -- for that, I recommend writing an SBT plugin. I won't go further into that, and will just explain the basic mechanism for generating source files.

If you inspect sourceGenerators, the setting mentioned by the SBT page, you'll see the following description:


[info] Setting: scala.collection.Seq[sbt.Task[scala.collection.Seq[java.io.File]]]

That means it is a setting (that is, its value is fixed by the configuration file). The setting contains a sequence, which means you can have more than one source generator. This sequence contains tasks, so each generator is a task, and that means they will be evaluated every time they get executed. The task must return a sequence of files, which I assumed, correctly, to be the list of files that were generated.

Now, you'll also see further down this information:

[info] Reverse dependencies:
[info] root/compile:managedSources

That means it is managedSources that uses sourceGenerators. And running inspect uses managedSources shows this:

[info] root/compile:sources

In other words, whenever you compile, any source generators you have defined will be run. You can see as well that this is defined not only for compile, but also for test or any other compilation task you may have (I also have it:compile, for example).

So, with that in mind, we can start creating our generator. All the lines below can be placed in a build.sbt file, though with a plugin you'd use plain Scala files instead. This is just to quickly demonstrate how it's used. First, I'm going to create a task producing a sequence of files, which will be my generator:

lazy val generator = taskKey[Seq[File]]("My Generator")          

Don't ask me why it's "lazy val" -- I'm simply repeating what I saw elsewhere. :) Also note that this uses the equals sign, not the colon-equals sign.

Now that we have a task key, we can assign a task to it. Since it's going to be of some complexity, let's start with:

generator in Compile := {

Now we can proceed with the rest. I'm going to define a method with the basic generating capabilities, and then call this method with some parameters as the body of this task. My generator will be pretty simple: given source and destination directories, copy all files ending with .txt from the source to the destination, changing the extension to .scala. Not very useful, perhaps, but enough to show how to get at some source, and produce something with it at a proper destination. So here it is:

  import _root_.java.nio.file.{Files, StandardCopyOption}
  def generate(src: File, dst: File): Seq[File] = {
    val sourceFiles = Option(src.list) getOrElse Array() filter (_ endsWith ".txt")
    if (sourceFiles.nonEmpty) dst.mkdirs()
    for (file <- sourceFiles) yield {
      val srcFile = src / file
      val dstFile = dst / ((file take (file lastIndexOf '.')) + ".scala")
      Files.copy(srcFile.toPath, dstFile.toPath, StandardCopyOption.REPLACE_EXISTING)
      dstFile
    }
  }

There's a couple of things here. First, note that I'm handling the case where there are no source files -- I tested this on a project with multiple subprojects, which resulted in annoying exceptions when trying it out. Also, note that I create the target directory: even though SBT provided me with a target directory, it didn't actually create it. And I pass an option to replace existing files as well -- remember that it has to work without running clean every time. Finally, notice how I return the destination files, as required by sourceGenerators.

Now, for source and destination directories. There's a setting for the destination directory, which I also saw on the linked SBT docs page. As for the base directory, I'll get the base directory of the current project and add a subdirectory to it. So my task ends with:

  generate(baseDirectory.value / "managed", sourceManaged.value)
}

All that remains is assigning it to sourceGenerators, which actually took some time because the documentation was wrong. In the end, I saw an email mentioning that the ".task" macro suggested in the SBT docs doesn't actually exist because the name was already taken by something else. So trying to use that gives strange errors. The actual syntax I had to use is this:

sourceGenerators in Compile <+= (generator in Compile)
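
As an aside, sbt 0.13 also provides a taskValue method on task keys, so the same wiring can presumably be written as below; this is a sketch based on the sbt 0.13 API, not something I have verified against the exact version used here:

// Alternative wiring (assumes an sbt 0.13.x release where .taskValue exists)
sourceGenerators in Compile += (generator in Compile).taskValue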

To test, I wrote some stuff to a text file, intentionally meant to cause a compilation error, and ran the compile task with this result:

sinks:master>compile
[info] Compiling 1 Scala source to /Users/dsobral/src/sinks/target/scala-2.11/classes...
[error] /Users/dsobral/src/sinks/target/scala-2.11/src_managed/test.scala:1: expected class or object definition
[error] This file should cause a compilation error.
[error] ^
[error] one error found
[error] (root/compile:compile) Compilation failed
[error] Total time: 1 s, completed Jul 4, 2014 8:55:12 PM

by Daniel Sobral (noreply@blogger.com) at July 05, 2014 02:57 AM

July 04, 2014

Functional Jobs

Scala/Play Developer - Start-Up in Hamburg at Risk.Ident GmbH (Full-time)

Have you learned to love Scala? Are you motivated to throw yourself into new challenges every day? Do you enjoy collaborating in an agile Scrum team, exchanging ideas and putting your own into practice?

Then we should talk!

What we offer:

  • Agile development in a friendly environment
  • Flat hierarchies and room for creative development
  • A young, likeable team
  • Fresh organic fruit to keep your vitamin levels up
  • Very good pay
  • The opportunity to grow together with us and keep developing your skills

What you offer:

  • Scala skills
  • Good Java skills
  • Confident use of the Play Framework (Scala)
  • Solid knowledge of software and web development
  • Excellent written and spoken German and English

Knowledge of JavaScript and ExtJS/Sencha would be an advantage.

Who we are:

Risk Ident GmbH was founded in June 2012 and offers its customers products in the field of online fraud prevention. The software is developed by a highly committed team in a beautiful office in Hamburg's Sternschanze district.

Do you feel up to a new challenge, and would you like to work in a young, dynamic team with a start-up atmosphere?

Then send us your detailed application documents now at recruitment [at] riskident [dot] com!

Get information on how to apply for this position.

July 04, 2014 08:54 AM

June 29, 2014

scala-lang.org

Scala 2.12 roadmap

Scala 2.12 will require Java 8. Here’s how we plan to make this transition as smooth as possible.

Goals

  • Minimize overhead of the transition for both users and library maintainers.
  • Continue Java 6 support for a while longer (only in Scala 2.11).
  • Track the Java platform evolution.

How

  • Upcoming 2.11.x releases will introduce the following experimental features (under a flag): Java 8-style closure compilation, Miguel’s new back-end & optimizer.
  • Hassle-free cross-building between 2.11 and 2.12 through full backward source compatibility (we won’t remove deprecated methods, but will support optional deprecation errors). Closely align 2.11 and 2.12 compiler and standard library code bases.
  • The official Scala 2.12 distribution will be built for Java 8 (and thus require it). The new back-end (and optimizer) will become the default.

Background

  • We can’t have one Scala binary version target two different Java versions without further artifactId name mangling. Even if Maven did have support for specifying the required Java version, this fork would be a big burden on the ecosystem. Thus, the split between required Java versions has to align with the Scala (binary) version.
  • We’ll check 2.11/2.12 cross-building by running the same community build on both versions. To improve backwards source compatibility, Scala 2.12 will not remove deprecated members. The 2.12 compiler will however (by default) emit deprecation errors for usage of members deprecated <= 2.11.0. (In principle, if we were to compile the 2.12 library for Java 6, it should be backwards binary compatible with 2.11.)
  • It’s important to keep up with the platform, even though Java 8’s MethodHandle-based support for closures may not immediately yield significant performance benefits (definitely reduces bytecode size, and thus likely compilation times, though). For platforms that don’t support Java 8 bytecode yet, two projects exist that rewrite Java 8 invokedynamic bytecode to Java 6 (retrolambda or Forax’s JSR292 backport). I’m not aware of the equivalent for default methods, but it’s feasible.

Shared features between Scala 2.11 (under a flag) & 2.12

  • Compile lambdas efficiently using method handles. (Separate compatibility module needed on 2.11 – see below.)

  • Java 8 interop (bidirectional):

    • Improve support for reading Java 8 bytecode (already in 2.11)
    • Improve and turn on SAM support by default (synthesize anonymous class java 8-style). This allows calling Java 8 higher-order methods seamlessly from Scala (already in 2.11 under -Xexperimental; a minimal sketch follows right after this list).
    • Compatibility module to let Java 8 call Scala higher-order methods.
  • Fully integrate Miguel’s new back-end & optimizer (refactor code, test & document in-depth, remove old back-end).

  • Style checker: an efficient, community-driven, platform for accurate coding style checking (built on top of the compiler).

  • Collections: improve test coverage, performance, documentation (& modularize?)

  • Improve documentation: focus on content. (This is a great place to start contributing, as well as on the tooling side of documentation.)

  • Continue infrastructure improvements (sbt build, improve pull request validation & release automation, bug tracker cleanup and automation).
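
As a minimal sketch of the SAM support mentioned above (assuming scalac -Xexperimental on 2.11), a Scala function literal can be used where a Java single-abstract-method type such as java.lang.Runnable is expected:

object SamSketch extends App {
  // Runnable has a single abstract method, so the function literal
  // below converts to it under -Xexperimental (planned default in 2.12).
  val t = new Thread(() => println("hello from a Scala lambda"))
  t.start()
  t.join()
}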

Features exclusive to Scala 2.12: more Java 8 fun

Development of the following features starts in 2015. Since they are binary incompatible, they can’t be backported to 2.11.

  • Turn FunctionN into Functional Interfaces, so that Java 8 code can call higher-order methods in Scala without a wrapper.
  • Support for @interface traits, which are guaranteed to compile to Java interfaces (useful for interop, performance and binary compatibility). This is a generalization of the above feature.
  • Streams: integrate into Scala collections? (Anywhere from providing converters to replacing existing functionality.)
  • Use the JDK’s forkjoin library instead of embedding our own. (Switch the global default ExecutionContext to be backed by the ForkJoinPool.commonPool().)
  • SIP-20 Improved lazy val initialization (if time allows).

Timing

Scala 2.10.5 (Q4 2014) will be the last 2.10 release. We’re planning five 2.11.x releases in 2014, and a few more in 2015 (we’re still deciding on when to EOL 2.11.x). At Typesafe, 2.12 development will begin with infrastructure work in Q4 2014, with our development focus shifting to 2.12 in 2015.

2.10.0            04/01/2013      First 2.10.x release
2.11.0            16/04/2014      First 2.11.x release
2.11.1            19/05/2014
2.11.2            21/07/2014
2.11.3            29/09/2014
2.10.5            Q4 2014         Last 2.10.x release
2.12.0-M1         24/11/2014
2.11.4            Dec 2014
2.12.0-M{2,3,4}   Q{1,2,3} 2015   Quarterly 2.12.0-Mx releases
2.12.0-M5         Oct 2015
2.12.0-RC1        Nov 2015        (1 year after M1)
2.12.0            Jan 2016

During the development of Scala 2.11, we’ve made big steps forward in automating our release process and regression testing via our Community Build which builds 1M LOC of popular open source projects. Both the release script and the community builds are also run on a nightly basis.

As such, as of Scala 2.11.1, we’ve decided to skip Release Candidates for 2.x.y releases where y > 0. This enables more frequent minor releases on a predictable schedule.

(This roadmap was published on 30 June 2014.)

June 29, 2014 10:00 PM


May 30, 2014

Functional Jobs

Distributed Systems Engineer (Scala/JVM) at Fauna, Inc. (Full-time)

Distributed Systems Engineer

Founded by the team that scaled Twitter, Fauna is the next-generation database for social, mobile, and games. Join us and become part of a small team of the best software engineers in the world.

How we work

Our work environment is relaxed and individually oriented. We avoid meetings and pair programming. Instead, we require code reviews, and expect you to ask for help when you need it.

We are solving fundamental problems in computer science, so you must approach your work with both humility and rigor. And because our customers place an extreme degree of trust in us, we value pragmatism and personal responsibility very highly.

Benefits

We offer competitive equity, salary, and health benefits, two hypoallergenic cats, and the chance to change the industry forever.

We are family-friendly and support a healthy work-life balance, instead of the crunch and burnout cycle common to startups. In return we ask for your loyalty and hope that you can build your career at Fauna.

We are located in Berkeley, California.

Position

You have designed and implemented multiple distributed systems and operated them in production. You know that you can do better, given the opportunity.

You must have experience with:

  • Multiple statically-typed languages
  • Asynchronous programming with futures
  • Network services
  • Commutative replicated datatypes
  • Performance analysis

Scala experience is a plus, as is experience with consensus protocols such as Paxos.

Get information on how to apply for this position.

May 30, 2014 07:14 PM

Scala Developer – Atlassian Marketplace at Atlassian (Full-time)

  • Are you an innovative and talented web developer with a love of functional programming?
  • Are you interested in working on a fast-paced team that ships code to millions of users several times a week?
  • Want to build the future of enterprise app marketplaces at a company built by developers for developers?

As a full stack Scala developer at Atlassian you will join an engineering-led company and the award-winning leader in software development and collaboration tools with over 35,000 customers. We're looking for a results-driven Scala, scalaz, or Java developer to help stoke the fuel on our Atlassian Marketplace rocket ship. You will be working in a fast-paced environment on a service-based application where every line of code you write will be appreciated by a customer community of millions. You will be responsible for implementing, monitoring, and optimizing the code that powers Atlassian Marketplace's data model, analytics, and service APIs, as well as for reducing technical debt and improving the SDLC.

As a developer in one of the fastest-growing businesses at Atlassian, you will join the team with the most to gain as our products scale rapidly. This is a highly technical developer position where you will have the autonomy to dream up and implement great features and services, based in San Francisco and reporting to the Development Manager.

What you'll do

  • Help us build and scale a service that's used by every Atlassian customer around the world
  • Develop awesome new features front to back, ship multiple times per week
  • Engage with other developers, front-end designers, and product managers
  • Liaise with the technical leads and architects to promote great software design and quality
  • Drive innovation by coming up with new and surprising ideas for our products and processes
  • Point out issues with the existing architecture and code, and clean it up by practicing BTCYS
  • Monitor, diagnose, and fine-tune the back-end, data model, and analytics performance of a large service

Required skills

  • Experience with Scala (or Java experience and a strong desire to learn Scala)
  • Deep architectural understanding of web applications
  • Great creative and innovative problem-solving skills
  • Initiative and the ability to work independently and in a team

Useful additions

  • Knowledge in some of the standard front-end technologies like modern HTML, CSS, JavaScript (we use jQuery and Backbone.js), REST, JSON
  • Knowledge of open source libraries, tools and frameworks, e.g. for logging, wiring, testing, building. The more the merrier!
  • Knowledge of NoSQL databases (especially MongoDB)
  • Interest to learn more about new languages and frameworks, excitement for new trends in application design

You have a great project you can point us to and highlight your personal contribution. You've handled a full time software development workload for at least three years. Show us something: your Bitbucket or Github repos, your technology focused Twitter or blog, etc...

Come and talk to us: join a fast-growing pre-IPO tech success with a culture of openness and honesty, and no bullshit or bureaucracy.

N.B. Over 100 other great jobs are open at Atlassian. Join us!

Get information on how to apply for this position.

May 30, 2014 06:47 PM

May 21, 2014

Paul Chiusano

This blog has moved

I am going to be moving future posts to pchiusano.io (feed at pchiusano.io/feed.xml), which currently runs off github pages. Although blogspot has served me well, I wanted a more lightweight process for creating posts that would encourage me to post more often. My previous process was to write posts in markdown, then convert those to HTML locally, and then finally copy this HTML into a new post on this blog. Using github pages, I can create, edit, and preview markdown posts directly in the browser if I want (or I can still work locally then push to the pchiusano.github.io repo).

I'll still keep this blog around for posterity, so all old links here will remain valid.

by Paul Chiusano (noreply@blogger.com) at May 21, 2014 04:07 PM

May 20, 2014

scala-lang.org

Scala 2.11.1 is now available!

We are very pleased to announce the release of Scala 2.11.1!

This release contains an important fix for serialization, which was broken in Scala 2.11.0 (SI-8549). The fix necessarily breaks serialization compatibility between 2.11.0 and 2.11.1 (this is separate from binary compatibility, which is maintained).

Users of distributed systems that rely on serialization to exchange objects (such as akka) should upgrade to Scala 2.11.1 (and akka 2.3.3) immediately. We also strongly recommend that libraries that themselves declare classes with @SerialVersionUID annotations release a new version and ask their Scala 2.11 users to upgrade.
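
For reference, such a declaration looks like this (an illustrative sketch, not code from any particular library):

// A library class that pins its serialization version explicitly.
@SerialVersionUID(1L)
class Message(val body: String) extends Serializable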

We apologize for the breakage. We have included a new suite of tests that will ensure stability of serialization for the remainder of the 2.11.x series.

Compared to 2.11.0, this release fixes 26 issues. We reviewed and merged 51 pull requests.

The next minor Scala 2.11 release will be available in at most 2 months, or sooner if prompted by a serious issue.

The remainder of these release notes summarizes the 2.11.x series, and as such is not specific to this minor release.

Upgrading

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x eco-system, please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

Required Java Version

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.1, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support. The next major release, 2.12, will most likely target Java 8 by default.

New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

  • Collections

    • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
    • Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively; a small sketch follows right after this list).
    • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
    • List has improved performance on map, flatMap, and collect.
    • See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
  • Modularization

    • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above.
    • The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
  • Reflection, macros and quasiquotes

    • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
    • See also this summary of the experimental side of the 2.11 development cycle.
    • #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
  • Back-end

    • The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
    • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
    • Branch elimination through constant analysis #2214
    • Scala.js, a separate project, provides an experimental JavaScript back-end for Scala 2.11. Note that it is not part of the standard Scala distribution.
    • Be more Avian-friendly.
  • Compiler Performance

    • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
    • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
    • Improve performance of reflection SI-6638
  • The IDE received numerous bug fixes and improvements!

  • REPL

  • Improved -Xlint warnings

    • Warn about unused private / local terms and types, and unused imports.
    • This will even tell you when a local var could be a val.
  • Slimming down the compiler

    • The experimental .NET backend has been removed from the compiler.
    • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
    • Search and destroy mission for ~5000 chunks of dead code. #1648
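
As a small illustration of the new LongMap and AnyRefMap mentioned under Collections above, a minimal sketch:

import scala.collection.mutable.{AnyRefMap, LongMap}

object MapSketch extends App {
  // LongMap avoids boxing Long keys on its fast paths.
  val counts = LongMap.empty[Int]
  counts(42L) = 1
  counts(42L) += 1

  // AnyRefMap is specialized for AnyRef keys such as String.
  val years = AnyRefMap.empty[String, Int]
  years("Scala 2.11") = 2014
  println(counts(42L) + " " + years("Scala 2.11"))
}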

The Scala team and contributors fixed 655 bugs that are exclusive to Scala 2.11! We also backported as many as possible. With the release of 2.11, 2.10 backports will be dialed back.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

Concretely, according to git log --no-merges --oneline 2.11.x --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn, 115 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, A. P. Marki, Simon Ochsenreither, Den Shabalin, Miguel Garcia, James Iry, Iain McGinniss, Grzegorz Kossakowski, Rex Kerr, François Garillot, Vladimir Nikolaev, Eugene Vigdorchik, Lukas Rytz, Mirco Dotta, Rüdiger Klaehn, Antoine Gourlay, Raphael Jolly, Simon Schaefer, Kenji Yoshida, Paolo Giarrusso, Luc Bourlier, Hubert Plociniczak, Aleksandar Prokopec, Vlad Ureche, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Iulian Dragos, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Roberto Tyley, Olivier Blanvillain, Mark Harrah, James Ward, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, kurnevsky, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Marcin Kubala, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Roper, Heather Miller, Havoc Pennington, Guillaume Martres, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres.

Thank you all very much.

If you find any errors or omissions in these release notes, please submit a PR!

Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

Before reporting a bug, please have a look at these known issues.

Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

Available projects

The following Scala projects have already been released against 2.11! See also @jrudolph’s analysis of the availability of 2.11 builds of popular libraries (as well as which ones are missing); updated regularly.

We’d love to include your release in this list as soon as it’s available – please submit a PR to update these release notes.

"org.scalacheck"                   %% "scalacheck"                % "1.11.4"
"org.scalatest"                    %% "scalatest"                 % "2.1.7"
"org.scalautils"                   %% "scalautils"                % "2.1.7"
"com.typesafe.akka"                %% "akka-actor"                % "2.3.3"
"com.typesafe.scala-logging"       %% "scala-logging-slf4j"       % "2.1.2"
"org.scala-lang.modules"           %% "scala-async"               % "0.9.1"
"org.scalikejdbc"                  %% "scalikejdbc-interpolation" % "2.0.0"
"com.softwaremill.scalamacrodebug" %% "macros"                    % "0.4"
"com.softwaremill.macwire"         %% "macros"                    % "0.6"
"com.chuusai"                      %% "shapeless"                 % "1.2.4"
"com.chuusai"                      %% "shapeless"                 % "2.0.0"
"org.nalloc"                       %% "optional"                  % "0.1.0"
"org.scalaz"                       %% "scalaz-core"               % "7.0.6"
"com.assembla.scala-incubator"     %% "graph-core"                % "1.8.1"
"com.nocandysw"                    %% "platform-executing"        % "0.5.0"
"com.qifun"                        %% "stateless-future"          % "0.2.2"
"com.github.scopt"                 %% "scopt"                     % "3.2.0"
"com.dongxiguo"                    %% "commons-continuations"     % "0.2.2"
"com.dongxiguo"                    %% "memcontinuationed"         % "0.3.2"
"com.dongxiguo"                    %% "fastring"                  % "0.2.4"
"com.dongxiguo"                    %% "zero-log"                  % "0.3.5"
"com.github.seratch"               %% "ltsv4s"                    % "1.0.0"
"com.googlecode.kiama"             %% "kiama"                     % "1.6.0"
"org.scalamock"                    %% "scalamock-scalatest-support" % "3.1.1"
"org.scalamock"                    %% "scalamock-specs2-support"  % "3.1.1"
"com.github.nscala-time"           %% "nscala-time"               % "1.0.0"
"com.github.xuwei-k"               %% "applybuilder70"            % "0.1.3"
"com.github.xuwei-k"               %% "nobox"                     % "0.1.9"
"org.typelevel"                    %% "scodec-bits"               % "1.0.0"
"org.typelevel"                    %% "scodec-core"               % "1.0.0"
"com.sksamuel.scrimage"            %% "scrimage"                  % "1.3.20"
"net.databinder"                   %% "dispatch-http"             % "0.8.10"
"net.databinder"                   %% "unfiltered"                % "0.8.0"
"net.databinder"                   %% "unfiltered"                % "0.7.1"
"io.argonaut"                      %% "argonaut"                  % "6.0.4"
"org.specs2"                       %% "specs2"                    % "2.3.12"
"com.propensive"                   %% "rapture-core"              % "0.9.0"
"com.propensive"                   %% "rapture-json"              % "0.9.1"
"com.propensive"                   %% "rapture-io"                % "0.9.1"
"org.scala-stm"                    %% "scala-stm"                 % "0.7"
"org.parboiled"                    %% "parboiled-scala"           % "1.1.6"
"io.spray"                         %% "spray-json"                % "1.2.6"
"org.scala-libs"                   %% "scalajpa"                  % "1.5"
"com.casualmiracles"               %% "treelog"                   % "1.2.3"
"org.monifu"                       %% "monifu"                    % "0.9.3"
"org.mongodb"                      %% "casbah"                    % "2.7.2"
"com.clarifi"                      %% "f0"                        % "1.1.2"
"org.scalaj"                       %% "scalaj-http"               % "0.3.15"

The following libraries are specific to the 2.11.x minor release you’re using. If you depend on them, you should cross-version against the full Scala version as well!

"org.scalamacros"                   % "paradise"                  % "2.0.0" cross CrossVersion.full

Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full build and Maven build, see example.

scalaVersion        := "2.11.1"

crossScalaVersions  := Seq("2.11.1", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.1"
    case _ =>
      libraryDependencies.value
  }
}
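
The same CrossVersion.partialVersion guard works for any version-dependent setting. As a sketch, here is how you might enable a 2.11-only lint flag without breaking the 2.10 build (the flag choice is ours):

scalacOptions := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      scalacOptions.value :+ "-Ywarn-unused-import"
    case _ =>
      scalacOptions.value
  }
}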

Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

  • SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>.
  • SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }, as the bar in bar: String refers to the bar with type T. The Foo[String]’s bar is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)
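
To illustrate the SI-4577 change, a minimal example (the names are ours): the singleton-type pattern now tests reference equality against the module, exactly as case Foo => would.

object Foo

def isFoo(x: Any): Boolean = x match {
  case _: Foo.type => true   // 2.11: compiled to Foo eq x, as specified
  case _           => false
}

isFoo(Foo)    // true
isFoo("foo")  // false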

The following changes were made after a deprecation cycle (thank you, @soc, for leading the deprecation effort!):

  • SI-6809 Case classes without a parameter list are no longer allowed.
  • SI-7618 Octal number literals no longer supported.
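
Concretely, here is what these two removals mean in source code (a small illustration; the identifiers are ours):

case class Money()                            // OK: explicit (empty) parameter list
// case class Cash                            // no longer compiles (SI-6809)

val permissions = Integer.parseInt("755", 8)  // 493: one way to express an octal value now
// val permissions = 0755                     // no longer compiles (SI-7618)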

Finally, some notable improvements and bug fixes:

  • SI-8549 Fix bad regression: no serialVersionUID field for classes annotated with @SerialVersionUID. The Scala standard library itself was a victim of this bug. As such, collections serialized in 2.11.0 cannot be deserialized in 2.11.1. This regression occurred in a failed attempt to fix a related bug in 2.10.x, SI-6988, whereby classes annotated with non-literal UIDs, e.g. 0L - 123L, had no field generated.
  • SI-7296 Case classes with > 22 parameters are now allowed.
  • SI-3346 Implicit arguments of implicit conversions now guide type inference.
  • SI-6240 Thread safety of reflection API.
  • #3037 Experimental support for SAM synthesis.
  • #2848 Name-based pattern-matching.
  • SI-6169 Infer bounds of Java-defined existential types.
  • SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
  • SI-5917 Improve public AST creation facilities.
  • SI-8063 Expose much needed methods in public reflection/macro API.
  • SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).
  • SI-8157 Polymorphic methods are also subject to the restriction that only one overload may define default arguments.

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.
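
For example, invoking the compiler directly (the file name is ours):

scalac -Xfuture -deprecation -Xlint MyCode.scala

-Xfuture enables the future-proofed behavior described above, while -deprecation and -Xlint surface the corresponding warnings.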

Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler): deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

The following language “warts” have been deprecated:

  • SI-7605 Procedure syntax (only under -Xfuture).
  • SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
  • SI-6455 The automatic rewrite of .withFilter to .filter has been deprecated: a type must now implement withFilter to be compatible with for-comprehensions (see the sketch after this list).
  • SI-8035 Automatic insertion of () on missing argument lists.
  • SI-6675 Auto-tupling in patterns.
  • SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
  • SI-1503 Unsound type assumption for stable identifier and literal patterns.
  • SI-7629 View bounds (under -Xfuture).
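
Regarding the SI-6455 item above, here is a minimal sketch of what a type needs to support guards in for-comprehensions once the fallback to .filter is gone (all names are ours):

class Box[A](val value: A) {
  def map[B](f: A => B): Box[B] = new Box(f(value))
  def flatMap[B](f: A => Box[B]): Box[B] = f(value)
  // Required for guards: the compiler no longer rewrites withFilter to filter
  def withFilter(p: A => Boolean): Box[A] =
    if (p(value)) this else sys.error("predicate failed")
}

// for (x <- new Box(42) if x > 0) yield x + 1
// desugars to: new Box(42).withFilter(x => x > 0).map(x => x + 1)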

We’d like to emphasize the following library deprecations:

  • #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
  • scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
  • SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise
  • SI-3235 Deprecate round on Int and Long (#3581).
  • We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).

Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

Meta

Note that so far we’ve only talked about the jars generated by scalac for the standard library and reflection. Our policies do not extend to the meta-issue of ensuring binary compatibility for bytecode generated from identical sources by different versions of scalac. (The same problem exists for compiling on different JDKs.) While we strive to achieve this, it’s not something we can test in general. Notable examples where we know meta-binary compatibility is hard to achieve: specialisation and the optimizer.

In short, if binary compatibility of your library is important to you, use MiMa to verify compatibility before releasing. Compiling identical sources with different versions of the scala compiler (or on different JVM versions!) could result in binary incompatible bytecode. This is rare, and we try to avoid it, but we can’t guarantee it will never happen.
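
As a sketch of that workflow with the sbt MiMa plugin (the plugin version and key names below are assumptions from that era and may differ in your setup):

// project/plugins.sbt
addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "0.1.6")

// build.sbt
import com.typesafe.tools.mima.plugin.MimaPlugin.mimaDefaultSettings
import com.typesafe.tools.mima.plugin.MimaKeys.previousArtifact

mimaDefaultSettings

previousArtifact := Some("org.example" %% "mylib" % "1.0.0")

// then run the plugin's report task to compare against the released jar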

Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that this means it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)

License clarification

Scala is now distributed under the standard 3-clause BSD license. Originally, the same 3-clause BSD license was adopted, but it was slightly reworded over the years, and so the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

May 20, 2014 10:00 PM

May 12, 2014

Ruminations of a Programmer

Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors

I was looking at the presentation that Dean Wampler made recently regarding domain driven design, anemic domain models and how using functional programming principles helps ameliorate some of the problems there. There are some statements he made which, I am sure, made many OO practitioners chuckle. They contradict popular beliefs that encourage OOP as the primary way of modeling using DDD principles.

One statement that resonates a lot with my thoughts is "DDD encourages understanding of the domain, but don't implement the models". DDD does a great job of encouraging developers to understand the underlying domain model and ensuring a uniform vocabulary throughout the lifecycle of design and implementation. This is what design patterns also do: they give you a vocabulary that you can freely exchange with your fellow developers without touching any implementation details of the underlying pattern.

On the flip side, trying to implement DDD concepts using standard OO techniques, with state and behavior joined together, often gives you a muddled, mutable model. The model may be rich in the sense that you will find all concepts related to the particular domain abstraction baked into the class you are modeling. But it also makes the class fragile, since the abstraction becomes more locally focused, losing the global perspective of reusability and composability. As a result, when you try to compose multiple abstractions within the domain service layer, it becomes polluted with glue code that resolves the impedance mismatch between class boundaries.

So when Dean claims "Models should be anemic", I think he means to avoid this bundling of state and behavior within the domain object, which gives you a false sense of security about the richness of the model. He encourages the practice of building domain objects that carry only state, while modeling behaviors as standalone functions.


One other strawman argument that I come across very frequently is that bundling state and behavior by modeling the latter as methods of the class increases encapsulation. If you are still a believer in this school of thought, have a look at Scott Meyers' excellent article, which he wrote as early as 2000. He rejects the view that a class is the right level of modularization and encourages more powerful module systems as better containers of your domain behaviors.

As a continuation of my series on functional domain modeling, we continue with the example from the earlier posts and explore the theme that Dean discusses ..

Here's the anemic domain model of the Order abstraction ..

case class Order(orderNo: String, orderDate: Date, customer: Customer,
  lineItems: Vector[LineItem], shipTo: ShipTo,
  netOrderValue: Option[BigDecimal] = None, status: OrderStatus = Placed)

In the earlier posts we discussed how to implement the Specification and Aggregate Patterns of DDD using functional programming principles. We also discussed how to do functional updates of aggregates using data structures like Lens. In this post we will use these as the building blocks, use more functional patterns and build larger behaviors that model the ubiquitous language of the domain. After all, one of the basic principles behind DDD is to lift the domain model vocabulary into your implementation so that the functionality becomes apparent to the developer maintaining your model.

The core idea is to validate the assumption that building domain behaviors as standalone functions leads to an effective realization of the domain model according to the principles of DDD. The base classes of the model contain only the states that can be mutated functionally. All domain behaviors are modeled through functions that reside within the module that represents the aggregate.
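
As a minimal sketch of that layout, building on the Order defined above (the module and function names are ours), the aggregate's module holds behaviors over the state-only case class:

object OrderModel {
  // behaviors live in the module, not as methods on Order
  def itemCount(o: Order): Int = o.lineItems.size
  def isPlaced(o: Order): Boolean = o.status == Placed
}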

Functions compose, and that's precisely how we will chain sequences of domain behaviors to build bigger abstractions out of smaller ones. Here's a small function that values an Order. Note it returns a Kleisli, which essentially gives us composition over monadic functions. So instead of composing a -> b and b -> c, which we do with normal function composition, we can do the same over a -> m b and b -> m c, where m is a monad. Composition with effects, if you will.

def valueOrder = Kleisli[ProcessingStatus, Order, Order] { order =>
  val o = orderLineItems.set(
    order,
    setLineItemValues(order.lineItems)
  )
  o.lineItems.map(_.value).sequenceU match {
    case Some(_) => right(o)
    case _       => left("Missing value for items")
  }
}

But what does that buy us? What exactly do we gain from these functional patterns? It's the power to abstract over families of similar abstractions like applicatives and monads. Well, that may sound a bit rhetorical, and it needs a separate post to justify their use. Stated simply, they encapsulate the effects and side-effects of your computation so that you can focus on the domain behavior itself. Have a look at the process function below - it's actually a composition of monadic functions in action. But all the machinery that does the processing of effects and side-effects is abstracted within the Kleisli itself, so that the user-level implementation is simple and concise.

With Kleisli it's the power to compose over monadic functions. Every domain behavior has a chance of failure, which we model using the Either monad - here ProcessingStatus is just a type alias for this .. type ProcessingStatus[S] = \/[String, S]. Using the Kleisli, we don't have to write any code for handling failures. As you will see below, the composition reads just like that of normal functions - the design pattern takes care of the alternate flows.
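
Outside the domain model, the same pattern in miniature looks like this (a self-contained sketch against scalaz 7; parse and recip are our own names, not part of the Order example):

import scalaz._
import Scalaz._

type Result[A] = String \/ A

// two monadic functions, A => Result[B]
val parse = Kleisli[Result, String, Int] { s =>
  if (s.nonEmpty && s.forall(_.isDigit)) \/-(s.toInt) else -\/("not a number")
}

val recip = Kleisli[Result, Int, Double] { n =>
  if (n != 0) \/-(1.0 / n) else -\/("division by zero")
}

// Kleisli composition: any failure short-circuits the rest of the chain
val parseThenRecip = parse andThen recip

parseThenRecip.run("4")  // \/-(0.25)
parseThenRecip.run("0")  // -\/("division by zero")
parseThenRecip.run("x")  // -\/("not a number")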

Once the Order is valued, we need to apply discounts to qualifying items. It's another behavior that follows the same pattern of implementation as valueOrder.

def applyDiscounts = Kleisli[ProcessingStatus, Order, Order] { order =>
  val o = orderLineItems.set(
    order,
    setLineItemDiscounts(order.lineItems)  // the discount-setting counterpart of setLineItemValues
  )
  o.lineItems.map(_.discount).sequenceU match {
    case Some(_) => right(o)
    case _       => left("Missing discount for items")
  }
}

Finally we check out the Order ..

def checkOut = Kleisli[ProcessingStatus, Order, Order] { order =>
  val netOrderValue = order.lineItems.foldLeft(BigDecimal(0).some) { (s, i) =>
    s |+| (i.value |+| i.discount.map(d => Tags.Multiplication(BigDecimal(-1)) |+| Tags.Multiplication(d)))
  }
  right(orderNetValue.set(order, netOrderValue))
}

And here's the service method that composes all of the above domain behaviors into the big abstraction. We don't have any object to instantiate. Just plain function composition that results in an expression modeling the entire flow of events. And it's the cleanliness of abstraction that makes the code readable and succinct.

def process(order: Order) = {
  (valueOrder andThen applyDiscounts andThen checkOut) =<< right(orderStatus.set(order, Validated))
}

In case you are interested in the full source code of this small example, feel free to take a peek at my GitHub repo.

by Debasish Ghosh (noreply@blogger.com) at May 12, 2014 09:03 AM

April 24, 2014

Functional Jobs

functional software developer at OpinionLab (Full-time)


OpinionLab is seeking a Software Developer with strong agile skills to join our Chicago, IL based Product Development team in the West Loop.

As a member of our Product Development team, you will play a critical role in the architecture, design, development, and deployment of OpinionLab's web-based applications and services. You will be part of a high-visibility agile development team empowered to deliver high-quality, innovative, and market leading voice-of-customer (VoC) data acquisition and feedback intelligence solutions. If you thrive in a collaborative, fast-paced, get-it-done environment and want to be a part of one of Chicago's most innovative companies, we want to speak with you!

Key Responsibilities include:

  • Development of scalable data collection, storage, processing & distribution platforms & services.
  • Architecture and design of a mission critical SaaS platform and associated APIs.
  • Usage of and contribution to open-source technologies and frameworks.
  • Collaboration with all members of the technical staff in the delivery of best-in-class technology solutions.
  • Proficiency in Unix/Linux environments.
  • Work with UX experts in bringing concepts to reality.
  • Bridge the gap between design and engineering.
  • Participate in planning, review, and retrospective meetings (à la Scrum).

Desired Skills & Experience:

  • BDD/TDD, Pair Programming, Continuous Integration, and other agile craftsmanship practices
  • Desire to learn Clojure (if you haven't already)
  • Experience with both functional and object-oriented design and development within an agile environment
  • Polyglot programmer with mastery of one or more of the following languages: Lisp (Clojure, Common Lisp, Scheme), Haskell, Scala, Python, Ruby, JavaScript
  • Knowledge of one or more of: AWS, Lucene/Solr/Elasticsearch, Storm, Chef

Get information on how to apply for this position.

April 24, 2014 06:10 PM

April 20, 2014

scala-lang.org

Scala 2.11.0 is now available!

We are very pleased to announce the final release of Scala 2.11.0!

There have been no code changes since RC4, just improvements to documentation and a version bump to the most recent stable version of Akka actors. Here’s the difference between the release and RC4.

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x ecosystem; please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

  • Collections

    • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
    • Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively); a short sketch follows this feature list.
    • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
    • List has improved performance on map, flatMap, and collect.
    • See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
  • Modularization

    • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above.
    • The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
  • Reflection, macros and quasiquotes

    • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
    • See also this summary of the experimental side of the 2.11 development cycle.
    • #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
  • Back-end

    • The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
    • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
    • Branch elimination through constant analysis #2214
    • Scala.js, a separate project, provides an experimental JavaScript back-end for Scala 2.11. Note that it is not part of the standard Scala distribution.
    • Be more Avian-friendly.
  • Compiler Performance

    • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
    • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
    • Improve performance of reflection SI-6638
  • The IDE received numerous bug fixes and improvements!

  • REPL

  • Improved -Xlint warnings

    • Warn about unused private / local terms and types, and unused imports.
    • This will even tell you when a local var could be a val.
  • Slimming down the compiler

    • The experimental .NET backend has been removed from the compiler.
    • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
    • Search and destroy mission for ~5000 chunks of dead code. #1648
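
Picking up the collections item above, a quick sketch of the new specialized mutable maps:

import scala.collection.mutable.{AnyRefMap, LongMap}

val hits = LongMap.empty[Int]           // keys are stored as unboxed Longs
hits(42L) = 1
hits(42L) += 1                          // hits(42L) is now 2

val ages = AnyRefMap.empty[String, Int] // specialized hashing for AnyRef keys
ages("scala") = 10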

The Scala team and contributors fixed 613 bugs that are exclusive to Scala 2.11.0! We also backported as many as possible. With the release of 2.11, 2.10 backports will be dialed back.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

Concretely, according to git log --no-merges --oneline master --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn, 112 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, Den Shabalin, Simon Ochsenreither, A. P. Marki, Miguel Garcia, James Iry, Iain McGinniss, Rex Kerr, Grzegorz Kossakowski, Vladimir Nikolaev, Eugene Vigdorchik, François Garillot, Mirco Dotta, Rüdiger Klaehn, Raphael Jolly, Kenji Yoshida, Paolo Giarrusso, Antoine Gourlay, Hubert Plociniczak, Aleksandar Prokopec, Simon Schaefer, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Luc Bourlier, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Vlad Ureche, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Olivier Blanvillain, Lukas Rytz, James Ward, Iulian Dragos, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, kurnevsky, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Roberto Tyley, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Mark Harrah, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Roper, Havoc Pennington, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres.

Thank you all very much.

If you find any errors or omissions in these release notes, please submit a PR!

Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

Before reporting a bug, please have a look at these known issues.

Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

Available projects

The following Scala projects have already been released against 2.11.0! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

"org.scalacheck"                   %% "scalacheck"                % "1.11.3"
"org.scalatest"                    %% "scalatest"                 % "2.1.3"
"org.scalautils"                   %% "scalautils"                % "2.1.3"
"com.typesafe.akka"                %% "akka-actor"                % "2.3.2"
"com.typesafe.scala-logging"       %% "scala-logging-slf4j"       % "2.0.4"
"org.scala-lang.modules"           %% "scala-async"               % "0.9.1"
"org.scalikejdbc"                  %% "scalikejdbc-interpolation" % "2.0.0-beta1"
"com.softwaremill.scalamacrodebug" %% "macros"                    % "0.4"
"com.softwaremill.macwire"         %% "macros"                    % "0.6"
"com.chuusai"                      %% "shapeless"                 % "1.2.4"
"com.chuusai"                      %% "shapeless"                 % "2.0.0"
"org.nalloc"                       %% "optional"                  % "0.1.0"
"org.scalaz"                       %% "scalaz-core"               % "7.0.6"
"com.nocandysw"                    %% "platform-executing"        % "0.5.0"
"com.qifun"                        %% "stateless-future"          % "0.1.1"
"com.github.scopt"                 %% "scopt"                     % "3.2.0"
"com.dongxiguo"                    %% "fastring"                  % "0.2.4"
"com.github.seratch"               %% "ltsv4s"                    % "1.0.0"
"com.googlecode.kiama"             %% "kiama"                     % "1.5.3"
"org.scalamock"                    %% "scalamock-scalatest-support" % "3.0.1"
"org.scalamock"                    %% "scalamock-specs2-support"  % "3.0.1"
"com.github.nscala-time"           %% "nscala-time"               % "1.0.0"
"com.github.xuwei-k"               %% "applybuilder70"            % "0.1.2"
"com.github.xuwei-k"               %% "nobox"                     % "0.1.9"
"org.typelevel"                    %% "scodec-bits"               % "1.0.0"
"org.typelevel"                    %% "scodec-core"               % "1.0.0"
"com.sksamuel.scrimage"            %% "scrimage"                  % "1.3.20"
"net.databinder"                   %% "dispatch-http"             % "0.8.10"
"net.databinder"                   %% "unfiltered"                % "0.7.1"
"io.argonaut"                      %% "argonaut"                  % "6.0.4"
"org.specs2"                       %% "specs2"                    % "2.3.11"
"com.propensive"                   %% "rapture-core"              % "0.9.0"
"com.propensive"                   %% "rapture-json"              % "0.9.1"
"com.propensive"                   %% "rapture-io"                % "0.9.1"
"org.scala-stm"                    %% "scala-stm"                 % "0.7"

The following projects were released against 2.11.0-RC4, with a 2.11 build hopefully following soon:

"org.scalafx"            %% "scalafx"            % "8.0.0-R4"
"org.scalafx"            %% "scalafx"            % "1.0.0-R8"
"org.scalamacros"        %% "paradise"           % "2.0.0-M7"
"com.clarifi"            %% "f0"                 % "1.1.1"
"org.parboiled"          %% "parboiled-scala"    % "1.1.6"
"org.monifu"             %% "monifu"             % "0.4"

Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full build and Maven build, see example.

scalaVersion        := "2.11.0"

crossScalaVersions  := Seq("2.11.0", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.1"
    case _ =>
      libraryDependencies.value
  }
}

Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

  • SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>.
  • SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }, as the bar in bar: String refers to the bar with type T. The Foo[String]’s bar is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)

The following changes were made after a deprecation cycle (thank you, @soc, for leading the deprecation effort!):

  • SI-6809 Case classes without a parameter list are no longer allowed.
  • SI-7618 Octal number literals no longer supported.

Finally, some notable improvements and bug fixes:

  • SI-7296 Case classes with > 22 parameters are now allowed.
  • SI-3346 Implicit arguments of implicit conversions now guide type inference.
  • SI-6240 Thread safety of reflection API.
  • #3037 Experimental support for SAM synthesis.
  • #2848 Name-based pattern-matching.
  • SI-6169 Infer bounds of Java-defined existential types.
  • SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
  • SI-5917 Improve public AST creation facilities.
  • SI-8063 Expose much needed methods in public reflection/macro API.
  • SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler): deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

The following language “warts” have been deprecated:

  • SI-7605 Procedure syntax (only under -Xfuture).
  • SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
  • SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions.
  • SI-8035 Automatic insertion of () on missing argument lists.
  • SI-6675 Auto-tupling in patterns.
  • SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
  • SI-1503 Unsound type assumption for stable identifier and literal patterns.
  • SI-7629 View bounds (under -Xfuture).

We’d like to emphasize the following library deprecations:

  • #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
  • scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
  • SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise
  • SI-3235 Deprecate round on Int and Long (#3581).
  • We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).

Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

Meta

Note that so far we’ve only talked about the jars generated by scalac for the standard library and reflection. Our policies do not extend to the meta-issue of ensuring binary compatibility for bytecode generated from identical sources by different versions of scalac. (The same problem exists for compiling on different JDKs.) While we strive to achieve this, it’s not something we can test in general. Notable examples where we know meta-binary compatibility is hard to achieve: specialisation and the optimizer.

In short, if binary compatibility of your library is important to you, use MiMa to verify compatibility before releasing. Compiling identical sources with different versions of the scala compiler (or on different JVM versions!) could result in binary incompatible bytecode. This is rare, and we try to avoid it, but we can’t guarantee it will never happen.

Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that this means it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)

License clarification

Scala is now distributed under the standard 3-clause BSD license. Originally, the same 3-clause BSD license was adopted, but it was slightly reworded over the years, and so the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

April 20, 2014 10:00 PM

Scala 2.11.0 is now available!

We are very pleased to announce the final release of Scala 2.11.0!

There have been no code changes since RC4, just improvements to documentation and version bump to the most recent stable version of Akka actors. Here’s the difference between the release and RC4.

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x eco-system, please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

  • Collections

    • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
    • Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively).
    • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
    • List has improved performance on map, flatMap, and collect.
    • See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
  • Modularization

    • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above.
    • The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
  • Reflection, macros and quasiquotes

    • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
    • See also this summary of the experimental side of the 2.11 development cycle.
    • #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
  • Back-end

    • The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
    • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
    • Branch elimination through constant analysis #2214
    • Scala.js, a separate project, provides an experimental JavaScript back-end for Scala 2.11. Note that it is not part of the standard Scala distribution.
    • Be more Avian- friendly.
  • Compiler Performance

    • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
    • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
    • Improve performance of reflection SI-6638
  • The IDE received numerous bug fixes and improvements!

  • REPL

  • Improved -Xlint warnings

    • Warn about unused private / local terms and types, and unused imports.
    • This will even tell you when a local var could be a val.
  • Slimming down the compiler

    • The experimental .NET backend has been removed from the compiler.
    • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
    • Search and destroy mission for ~5000 chunks of dead code. #1648

The Scala team and contributors fixed 613 bugs that are exclusive to Scala 2.11.0! We also backported as many as possible. With the release of 2.11, 2.10 backports will be dialed back.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

Concretely, according to git log --no-merges --oneline master --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn, 112 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, Den Shabalin, Simon Ochsenreither, A. P. Marki, Miguel Garcia, James Iry, Iain McGinniss, Rex Kerr, Grzegorz Kossakowski, Vladimir Nikolaev, Eugene Vigdorchik, François Garillot, Mirco Dotta, Rüdiger Klaehn, Raphael Jolly, Kenji Yoshida, Paolo Giarrusso, Antoine Gourlay, Hubert Plociniczak, Aleksandar Prokopec, Simon Schaefer, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Luc Bourlier, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Vlad Ureche, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Olivier Blanvillain, Lukas Rytz, James Ward, Iulian Dragos, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, kurnevsky, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Roberto Tyley, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Mark Harrah, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Roper, Havoc Pennington, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres.

Thank you all very much.

If you find any errors or omissions in these relates notes, please submit a PR!

Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

Before reporting a bug, please have a look at these known issues.

Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

Available projects

The following Scala projects have already been released against 2.11.0! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

"org.scalacheck"                   %% "scalacheck"                % "1.11.3"
"org.scalatest"                    %% "scalatest"                 % "2.1.3"
"org.scalautils"                   %% "scalautils"                % "2.1.3"
"com.typesafe.akka"                %% "akka-actor"                % "2.3.2"
"com.typesafe.scala-logging"       %% "scala-logging-slf4j"       % "2.0.4"
"org.scala-lang.modules"           %% "scala-async"               % "0.9.1"
"org.scalikejdbc"                  %% "scalikejdbc-interpolation" % "2.0.0-beta1"
"com.softwaremill.scalamacrodebug" %% "macros"                    % "0.4"
"com.softwaremill.macwire"         %% "macros"                    % "0.6"
"com.chuusai"                      %% "shapeless"                 % "1.2.4"
"com.chuusai"                      %% "shapeless"                 % "2.0.0"
"org.nalloc"                       %% "optional"                  % "0.1.0"
"org.scalaz"                       %% "scalaz-core"               % "7.0.6"
"com.nocandysw"                    %% "platform-executing"        % "0.5.0"
"com.qifun"                        %% "stateless-future"          % "0.1.1"
"com.github.scopt"                 %% "scopt"                     % "3.2.0"
"com.dongxiguo"                    %% "fastring"                  % "0.2.4"
"com.github.seratch"               %% "ltsv4s"                    % "1.0.0"
"com.googlecode.kiama"             %% "kiama"                     % "1.5.3"
"org.scalamock"                    %% "scalamock-scalatest-support" % "3.0.1"
"org.scalamock"                    %% "scalamock-specs2-support"  % "3.0.1"
"com.github.nscala-time"           %% "nscala-time"               % "1.0.0"
"com.github.xuwei-k"               %% "applybuilder70"            % "0.1.2"
"com.github.xuwei-k"               %% "nobox"                     % "0.1.9"
"org.typelevel"                    %% "scodec-bits"               % "1.0.0"
"org.typelevel"                    %% "scodec-core"               % "1.0.0"
"com.sksamuel.scrimage"            %% "scrimage"                  % "1.3.20"
"net.databinder"                   %% "dispatch-http"             % "0.8.10"
"net.databinder"                   %% "unfiltered"                % "0.7.1"
"io.argonaut"                      %% "argonaut"                  % "6.0.4"
"org.specs2"                       %% "specs2"                    % "2.3.11"
"com.propensive"                   %% "rapture-core"              % "0.9.0"
"com.propensive"                   %% "rapture-json"              % "0.9.1"
"com.propensive"                   %% "rapture-io"                % "0.9.1"
"org.scala-stm"                    %% "scala-stm"                 % "0.7"

The following projects were released against 2.11.0-RC4, with an 2.11 build hopefully following soon:

"org.scalafx"            %% "scalafx"            % "8.0.0-R4"
"org.scalafx"            %% "scalafx"            % "1.0.0-R8"
"org.scalamacros"        %% "paradise"           % "2.0.0-M7"
"com.clarifi"            %% "f0"                 % "1.1.1"
"org.parboiled"          %% "parboiled-scala"    % "1.1.6"
"org.monifu"             %% "monifu"             % "0.4"

Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full sbt build and the equivalent Maven build, see the example.

scalaVersion        := "2.11.0"

crossScalaVersions  := Seq("2.11.0", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.1"
    case _ =>
      libraryDependencies.value
  }
}

Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

  • SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>. (A small example follows this list.)
  • SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }, as the bar in bar: String refers to the bar with type T. The Foo[String]’s bar is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)
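
A minimal Scala sketch of the first item; the Foo object is invented for the example:

object Foo

def matchType(x: Any) = x match {
  case _: Foo.type => "Foo eq x"  // 2.11: reference identity, as specified
  case _           => "other"
}

// If the old equality-based behavior is what you meant, match on the
// stable identifier instead:
def matchIdent(x: Any) = x match {
  case Foo => "Foo == x"
  case _   => "other"
}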

The following changes were made after a deprecation cycle. (Thank you, @soc, for leading the deprecation effort!)

  • SI-6809 Case classes without a parameter list are no longer allowed.
  • SI-7618 Octal number literals are no longer supported. (Both migrations are sketched just after this list.)
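
A minimal sketch of the corresponding migrations:

// SI-6809: a parameter list is now mandatory on case classes.
case class Probe()  // was: case class Probe -- now an error

// SI-7618: octal literals such as 0755 are gone; spell the value
// in hex, or parse it with an explicit radix:
val mode = Integer.parseInt("755", 8)  // 493
val mask = 0x1ff                       // hexadecimal literals still work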

Finally, some notable improvements and bug fixes:

  • SI-7296 Case classes with > 22 parameters are now allowed.
  • SI-3346 Implicit arguments of implicit conversions now guide type inference.
  • SI-6240 Thread safety of reflection API.
  • #3037 Experimental support for SAM synthesis.
  • #2848 Name-based pattern-matching (a sketch follows this list).
  • SI-6169 Infer bounds of Java-defined existential types.
  • SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
  • SI-5917 Improve public AST creation facilities.
  • SI-8063 Expose much needed methods in public reflection/macro API.
  • SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).
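
For #2848, a sketch of a name-based extractor (the Name class is invented for the example). The pattern matcher now only requires isEmpty and get on the result of unapply, so no Option needs to be allocated:

// For two bound variables, the result of `get` must expose _1 and _2.
final class Name(val first: String, val last: String) {
  def isEmpty = false
  def get     = this
  def _1      = first
  def _2      = last
}

object Name {
  def unapply(n: Name): Name = n  // no Some(...) allocation
}

val Name(f, l) = new Name("Ada", "Lovelace")  // f == "Ada", l == "Lovelace"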

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler). Deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

The following language “warts” have been deprecated:

  • SI-7605 Procedure syntax (only under -Xfuture).
  • SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
  • SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions (see the sketch after this list).
  • SI-8035 Automatic insertion of () on missing argument lists.
  • SI-6675 Auto-tupling in patterns.
  • SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
  • SI-1503 Unsound type assumption for stable identifier and literal patterns.
  • SI-7629 View bounds (under -Xfuture).
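
For SI-6455, a minimal sketch of keeping a type for-comprehension-friendly by defining withFilter directly (the Box type and its filtering semantics are invented):

final case class Box[A](value: A) {
  def map[B](f: A => B): Box[B] = Box(f(value))
  // Guards now desugar to withFilter only; the silent fallback to
  // filter is deprecated.
  def withFilter(p: A => Boolean): Box[A] =
    if (p(value)) this else sys.error("filtered-out Box")  // toy semantics
}

val evenPlusOne = for (x <- Box(42) if x % 2 == 0) yield x + 1
// desugars to Box(42).withFilter(_ % 2 == 0).map(_ + 1)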

We’d like to emphasize the following library deprecations:

  • #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated to be marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
  • scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
  • SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise.
  • SI-3235 Deprecate round on Int and Long (#3581).
  • We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).
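
A minimal sbt sketch of wiring up such a check with the sbt-mima-plugin (the organization and artifact below are placeholders; the setting key shown is the one used by recent plugin versions, and older releases spelled it differently):

// project/plugins.sbt
addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "1.1.3")

// build.sbt: declare the last released artifact to diff against,
// then run `sbt mimaReportBinaryIssues` before publishing.
mimaPreviousArtifacts := Set("com.example" %% "mylib" % "1.0.0")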

Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).
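
Schematically, with an invented library (the two class definitions stand for two separately compiled releases):

// mylib 1.0.0
class Util { def foo: Int = 1 }

// mylib 1.1.0 -- backwards compatible, since nothing was removed
class Util { def foo: Int = 1; def bar: Int = 2 }

// A client compiled against 1.1.0:
val n = (new Util).bar
// This links fine with 1.1.0 on the run-time classpath, but throws
// java.lang.NoSuchMethodError with only 1.0.0 there: adding `bar` broke
// forwards compatibility, just as removing `foo` would have broken
// backwards compatibility.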

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

Meta

Note that so far we’ve only talked about the jars generated by scalac for the standard library and reflection. Our policies do not extend to the meta-issue: ensuring binary compatibility for bytecode generated from identical sources by different versions of scalac. (The same problem exists for compiling on different JDKs.) While we strive to achieve this, it’s not something we can test in general. Notable examples where we know meta-binary compatibility is hard to achieve: specialisation and the optimizer.

In short, if binary compatibility of your library is important to you, use MiMa to verify compatibility before releasing. Compiling identical sources with different versions of the scala compiler (or on different JVM versions!) could result in binary incompatible bytecode. This is rare, and we try to avoid it, but we can’t guarantee it will never happen.

Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that, since the modules only guarantee backwards compatibility, scala-library-all does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)
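
In sbt terms, a build that wants everything in the distribution can depend on the aggregate directly (note the plain %, since the artifact carries the Scala version in its version number rather than in a cross-built suffix):

libraryDependencies += "org.scala-lang" % "scala-library-all" % "2.11.0"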

License clarification

Scala is now distributed under the standard 3-clause BSD license. Originally the same 3-clause BSD license was adopted, but it was slightly reworded over the years, and the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

April 20, 2014 10:00 PM

April 15, 2014

Paul Chiusano

The future of software, the end of apps, and why UX designers should care about type theory

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures... Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. […] The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be. 
Fred Brooks
Have you noticed how applications accrete feature after feature but never seem quite capable of doing everything we want? Software is a profound technology with enormous potential, and we stifle this potential with an antiquated metaphor. That metaphor is the machine. Software is now organized into static machines called applications. These applications ("appliances" is a better word) come equipped with a fixed vocabulary of actions, speak no common language, and cannot be extended, composed, or combined with other applications except with enormous friction. By analogy, what we have is a railway system where the tracks in each region are of differing widths, forcing trains and their cargo to be totally disassembled and then reassembled to transport anything across the country. As ridiculous as this sounds, this is roughly what we do at application boundaries: write explicit serialization and parsing code and lots of tedious (not to mention inefficient) code to deconstruct and reconstruct application data and functions.

This essay is a call to cast aside the broken machine metaphor and ultimately end the tyranny of applications. Applications can and ultimately should be replaced by programming environments, explicitly recognized as such, in which the user interactively creates, executes, inspects and composes programs. In this model, interaction with the computer is fundamentally an act of creation, the creative act of programming, of assembling language to express ideas, access information, and automate tasks. And software presents an opportunity to help humanity harness and channel "our vast imaginations, humming away, charged with creative energy".

Though the machine metaphor is wrong for software, it's also understandable why it's persisted. Before the discovery of software, arguably in the 1930s with Alan Turing's invention of the universal Turing machine, human technology had produced only physical artifacts like cash registers, engines, and light bulbs, built for some particular purpose and equipped with a largely fixed vocabulary of actions. With software came the idea that behavior and functionality could be specified as pure information, independent of the machine which interprets them. This raised novel possibilities. As pure information, a program is infinitely copyable at near zero cost, and in the internet age, capable of being transported anywhere on the planet almost instantaneously. A programmer can now miraculously turn thoughts to reality and deploy them around the globe by typing on a keyboard and clicking a few buttons. Though our mindset hasn't caught up yet, software relegated the machine (which once held primacy for the artifacts and technology produced by civilization) to an implementation detail, a substrate for the real technology--the specification of behavior in the form of a program.

We artificially limit the potential of this incredible technology by reserving its power for a tiny, select group of people (programmers), who use it to build applications with largely fixed sets of actions (we now put these machines on the internet too and call them "web applications") whose behaviors are not composable with other programs. Software let us escape the tyranny of the machine, yet we keep using it to build more prisons for our data and functionality!
Bob: Now, wait a minute. Applications usually have an API too, you know. If you really want programmability, why not just use the API? 
Alice: I wouldn't say 'usually', but okay, in theory let's suppose that's true. In practice, even though I'm a programmer and could in principle customize the applications I use, I don't because of the friction of doing so. Each application is a universe unto its own, with its own language and idiosyncratic modes of interaction. The situation hasn't improved with web applications, which have somewhat converged on ad hoc JSON+REST protocols as the lingua franca of application programmability. 
Bob: What's wrong with that? There are JSON parsers for every programming language under the sun! I even wrote a really fast, push-based, nonblocking parser in 5,000 lines of Java! It's pretty awesome. Check out how I optimized the parsing by hand-coding a switch-statement-based state machine for the parse table to reduce allocation rates and improve cache loc-- 
Alice: You're missing my point! Compare the overhead of calling a function in the 'native' language of your program vs calling a function exposed via JSON+REST. And no I don't mean the computational overhead, though that is a problem too. Within the language of your program, if I want to call a function returning a list of (employee, date) pairs, I simply invoke the function and get back the result. With JSON+REST, I get back a blob of text, which I then have to parse into a syntax tree and then convert back to some meaningful business objects. If I had that overhead and tedium for every function call I made I'd have quit programming long ago. 
Bob: Are you just saying you want more built in types for JSON, then? That's easy, I hear there's even a proposal to add a date type to JSON. 
Alice: And maybe in another fifteen years JSON will grow a standard for sending algebraic data types (they've been around for like 40 years, you know) and other sorts of values, like, you know, functions...
Bob: Functions?? Are you serious? You aren't talking about sending functions across the internet and just executing them, that's a huge security liability! 
Alice: Nevermind that for now. My point is-- 
Bob: --now wait a minute! You know, I was humoring you earlier by saying if you wanted programmability you could just use the application's API. Okay, for the sake of argument I'll grant that this can be rather inconvenient. But so what? You and I both know that 99.9% of users don't want to program or customize; they are perfectly happy with applications that do one thing, and do it well. 
Alice: I wouldn't say 'perfectly happy', I'd say that users are resigned to the notion that applications are machines with a fixed set of actions, and any limitations of these machines must simply be worked around via whatever tedium is required. But of course they would think that--we've never shown our users software that didn't work just like a machine, so how could we expect them to know about the wonderful, customizable universe of possibilities that we programmers get to play in every day? This isn't a good state of affairs, it's sad, and we ought to start doing something about it! It isn't hopeless--in fact, I find that if you get users in the right mindset they are positively incessant about wanting to customize their user experience and the actions supported by an application. It's human nature, our inner spirit of creativity and invention that can never be truly squelched! When we are shown something of use or interest to us, some piece of functionality or data, we begin thinking up possible variations and combinations that also interest us or seem useful. 
Bob: Okay, but let's be realistic. Do you really expect your users to be booting up text editors, running compilers, interpreting syntax and type errors and so forth just to get something accomplished? 
Alice: Of course not--no user should have to put up with the arcane programming environments that we professional programmers have to endure on a daily basis. Then again, we shouldn't have to either! Which is why the goal of software should not be to build machines, but to build pleasing, accessible programming environments that delight and inspire our users to creation while facilitating the sharing and reuse of programming ideas! Yes, we can and should optimize these environments for programming in various domains, which could include graphical views and so forth, but we should still place these environments in a unified framework rather than in walled gardens of functionality like the current batch of (web) appliances... er, 'applications'. 
Bob: So what are you saying? Get rid of Microsoft Word, Outlook, Gmail, Twitter, Facebook, and all the rest? 
Alice: Yes! Or rather, I would deconstruct these applications into libraries and grant users access to the functions and data types of these libraries within a grand unified programming environment. 
Bob: I want to talk more about that... but in any case, these applications you deride aren't just libraries, they are providing an intuitive interface to functionality that people find valuable, and we are going to need some sort of interface to this functionality that's better than a text editor and the command line. Providing this better interface is what applications do. 
Alice: If the interfaces provided by these applications are so intuitive, why are there rows and rows of 'Missing Manual' and 'For Dummies' books covering just about every application under the sun? Applications are failing at even their stated goal, but they do worse than that. Yes, an application is an (often terrible) interface to some library of functions, but it also traps this wonderful collection of potential building blocks in a mess of bureaucratic red tape. Any creator wishing to build atop or extend the functionality of an application faces a mountain of idiosyncratic protocols and data representations and some of the most tedious sort of programming imaginable: parsing, serializing, converting between different data representations, and error handling due to the inherent problem of having to pass through a dynamically typed and insufficiently expressive communication channel! And that's if an application even exposes any significant portion of its functionality through an actual API, which they often don't. We can do so much better! 
Bob: All right, I'll bite. Let's hear your story for how to organize the computing world without applications. 
Alice: I'm glad you asked...

The world without applications

The 'software as machine' view is so ingrained in people's thinking that it's hard to imagine organizing computing without some notion of applications. But let's return to first principles. Why do people use computers? People use computers in order to do and express things, to communicate with each other, to create, and to experience and interact with what others have created. People write essays, create illustrations, organize and edit photographs, send messages to friends, play card games, watch movies, comment on news articles, and they do serious work too--analyze portfolios, create budgets and track expenses, find plane flights and hotels, automate tasks, and so on. But what is important, what truly matters to people is simply being able to perform these actions. That each of these actions presently takes place in the context of some 'application' is not in any way essential. In fact, I hope you can start to see how unnatural it is that such stark boundaries exist between applications, and how lovely it would be if the functionality of our current applications could be seamlessly accessed and combined with other functions in whatever ways we imagine. This sort of activity could be a part of the normal interaction that people have with computers, not something reserved only for 'programmers', and not something that requires navigating a tedious mess of ad hoc protocols, dealing with parsing and serialization, and all the other mumbo-jumbo that has nothing to do with the idea the user (programmer) is trying to express. The computing environment could be a programmable playground, a canvas in which to automate whatever tasks or activities the user wishes.

Let me give an example of the problems with the current application-oriented model, and show what possibilities are put out of reach by our current framing of software. Please don't get bogged down in the details; I'm just trying to be illustrative here.

Suppose Carol and Dave are a young, conscientious couple intent on being disciplined about saving for retirement. But, they want to enjoy their time together as well, and so as part of their budget, which they manage using Mint.com, they allocate $200 per month to a virtual 'vacation' fund which accumulates from month to month. They also keep a shared Google doc in which they both jot down ideas for places they'd like to go and things they might like to do. Periodically, they take a vacation, drawing ideas from this doc. They make sure to keep the total cost of the trip under the amount that has accumulated into their vacation fund, and then attribute the cost of the trip to their vacation budget so it is deducted by Mint.com.

So far so good, but Carol, who is the planner in the relationship, notices that whenever she plans a vacation for the two of them she's doing a similar sort of thing. First, she opens up the Mint.com application to see how much money has accumulated in their vacation fund. Next, she opens up the Google doc to remind herself of the possible locations for trips they could take. Then, she goes to Kayak.com and searches for plane flights under the budget price, taking care to reserve enough leftover money for booking a hotel (say, on Hotels.com) and whatever other expenses are to be expected on the trip. It's a complex process, with lots of information and factors to keep straight, and it must be repeated each and every time they wish to plan a trip. Carol wonders if it would be possible to automate this process somehow, at least partially. She'd like a program that extracts a list of locations from their shared Google doc, then gets a list of possible flights to these locations and a list of possible hotels, then filters out any flight+hotel combinations that exceed the budget, then gives her the opportunity to interactively filter and browse through possible results, perhaps even allowing for interactive adjustments to certain base assumptions like the daily cost of miscellaneous expenses while on vacation, the dates of the trip, etc. This would save a lot of time and make the planning process more fun and creative.

Unfortunately, this sort of thing just isn't possible today. Kayak.com and Mint.com both lack APIs! Mint lets users download their transaction history, but this history doesn't indicate how much money has accumulated in each budget category. Kayak is even worse--it lacks a search API entirely.

So it seems Carol and Dave would be reduced to screen scraping if they want to programmatically build on Kayak and Mint. Google docs at least comes equipped with an API, but it's an ad hoc XML-over-REST API and there's friction associated with its use due to having to parse XML and so on. Overall, the friction and overhead of implementing this automation idea is way too high to justify it, so Carol doesn't bother and just does everything manually, or worse, gives up on a dream vacation!

Now let's imagine how things could be. Kayak, Mint, and Google docs would be, first and foremost, libraries rather than applications. Each might come equipped with custom views or editing environments for writing and executing certain 'shapes' of programs, but these views would not be their primary (or only) mode of interaction, as they are now. Instead, the collection of functions and data types in these libraries would be primary, and accessible within the unified programming environment of the user's desktop. This programming environment, moreover, would allow for transparent access to remote functionality, so users could write programs that call functions exposed via cloud services as well as functions defined locally.
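
To make the shape of this concrete, here is roughly what Carol's automation might look like in such an environment. Every type and function below is invented, standing in for functionality that Mint, the Google doc, Kayak, and Hotels.com would expose directly as libraries:

// All names here are hypothetical stand-ins.
case class Place(name: String)
case class Flight(to: Place, price: Double)
case class Hotel(at: Place, price: Double)
case class Trip(place: Place, flight: Flight, hotel: Hotel)

def vacationFund: Double              = 1400.00                 // from Mint
def tripIdeas: List[Place]            = List(Place("Lisbon"))   // from the Google doc
def flightsTo(p: Place): List[Flight] = List(Flight(p, 620.00)) // from Kayak
def hotelsAt(p: Place): List[Hotel]   = List(Hotel(p, 480.00))  // from Hotels.com
def miscCosts(p: Place): Double       = 200.00                  // adjustable assumption

val options = for {
  place  <- tripIdeas
  flight <- flightsTo(place)
  hotel  <- hotelsAt(place)
  if flight.price + hotel.price + miscCosts(place) <= vacationFund
} yield Trip(place, flight, hotel)
// `options` can be browsed and re-run interactively as the assumptions
// (dates, budget, misc costs) are adjusted.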

If that example seems contrived, here's a more 'serious' one: a widget-making business has a customer relationship management (CRM) application that's used by the sales team. For each potential client, they make notes about what widget features clients are most interested in. The company also uses some project management software that lets them track features, improvements, and fixes to the product, and group these into releases. Whenever the company rolls out a new version of the widget product, the sales team would like to cross-reference the list of changes that can be extracted from the project management software with the list of all the clients or leads that would be interested in these changes. Moreover, it would be nice to be able to take this list of potential clients who might be interested in newly released features and perhaps even assemble a form email calling out the particular features or improvements in the new version that each client was interested in. The sales team can of course add any personal touches to the emails before sending them to the potential clients.

Today, this process might end up being done manually, which doesn't scale very well if a business has hundreds or thousands of 'live' sales leads and a large number of features that they roll out with each release. Even if both the CRM and the project management app come with APIs, there is quite a bit of friction involved in writing a program that 'speaks' both APIs and handles all the boring concerns like parsing, deserialization, error handling, and so on.

I just made up these use cases, and I could come up with hundreds of others. No one piece of software 'does it all', and so individuals and businesses looking to automate or partially automate various tasks are often put in the position of having to integrate functionality across multiple applications, which is often painful or flat out impossible. The amount of lost productivity (or lost leisure time) on a global scale, both for individuals and business, is absolutely staggering.
Bob: All right, I think I finally see what you're getting at. These are very old ideas, you know. Haven't you ever heard of the Unix Philosophy? In fact, I could probably implement most of your use cases with 'a very small shell script'. 
Alice: You make it sound like Thompson and Ritchie invented the idea of composition. Mathematicians have been composing functions for hundreds (or even thousands) of years before that without making such a fuss about it or waving any sort of philosophical flag. But anyway, I would love to see you try to implement those tasks with a shell script, as you say. Have you ever tried reading a shell script written by someone else that's longer than 10 lines or so? I'm a professional programmer, well-trained in navigating all the arcane nonsense that's common in software, and a small part of me dies every time I have to write or read a bash script. I appreciate the spirit of the Unix Philosophy, but the implementation--writing programs in a terrible language that read and write 'vaguely parseable text'--leaves a lot to be desired. And JSON and XML aren't much better, either. 
Bob: So you really think that Carol and some sales guys are going to be writing programs, even if it is some theoretical future souped-up graphical programming environment? 
Alice: Why does that seem so unlikely to you? 
Bob: Because writing software is complicated! I know because I'm a professional programmer. We can't expect the masses to be writing the sort of complex program that we professional programmers produce. 
Alice: 'Complex programs'? You mean like Instagram? A website where you can post photos of kittens and subscribe to a feed of photos produced by other people? Or Twitter? Or any one of the 95% of applications which are just a CRUD interface to some data store? The truth is, if you strip applications of all their incidental complexity (largely caused by the artificial barriers at application boundaries), they are often extremely simple. But in all seriousness, why can't more people write programs? Millions of people use spreadsheets, an even more impoverished and arcane programming environment than what we could build. 
Bob: Maybe so, but I still don't think that a programming environment can ever be accessible to the majority of people. Spreadsheets are a good example--they are a rather accessible (if limited) form of programming, and not everyone uses the programmability of spreadsheets or even wants to! 
Alice: And two thousand years ago, most of the population was illiterate and arithmetic was considered too difficult for the average person, yet now we teach kids these things in elementary school. The truth is, we don't really know how many people might program if given a learnable programming environment and programming were reduced to its exhilarating, creative essence. I worry we have raised generations of programmers who are simply very good at tolerating bullshit and, paraphrasing Paul Lockhart, the most talented programmer of our time may be a waitress in Tulsa, Oklahoma who considers herself bad at computers. The spreadsheet brought programming (in a limited fashion) to millions of people, and a more accessible environment could bring it to millions or billions more. Who are you, with your limited imagination, to place a ceiling on how accessible programming could be? Well, the world is what we make of it, and I want to make a world in which applications die off, programming is no longer the awkward, arcane and tedious process it often is today, and the internet is used to transparently share, use, and compose functionality. Which brings me to my next point...

What's wrong with the internet

The internet contains vast pools of data and functionality largely trapped within noncomposable applications all competing to be the center of the universe.

The economy of the internet is deeply broken. Have you ever wondered why the internet market is dominated by a few huge businesses like Google, Facebook, Twitter, etc? High transaction costs imposed by application boundaries have distorted the software economy, making it artificially expensive to integrate functionality from third-parties. This selects for larger businesses with the resources to develop and integrate functionality internally, which they do using composable libraries within their own application boundaries. From here, network effects due mostly to high switching costs (again, because of application boundary friction) sustain the positions of these larger market players. We essentially have a situation in which these larger market players own a significant portion of the network effects on the web. It would be preferable if ownership of these network effects were transferred to the public domain and businesses were forced to compete on their ideas and cleverness in describing these ideas in software, rather than competing as they do now on how well they can coax users into entering various walled gardens and keep them there with lock-in and high switching costs. With a unified programming environment spanning the web (I'll say more about this in another post), we could see these transaction costs and switching costs drop to nearly zero and a radical democratization of the internet market as ownership of these network effects is transferred to the public domain.

Unlike the production of many physical goods and services, software does not have any natural economies of scale. Arguably, there are diseconomies of scale with software--per unit of functionality, software becomes harder to write with the addition of more people, resources and code, because of the complexity of managing a large codebase and coordinating concurrent development. Large businesses with significant codebases fight a constant (losing) battle against entropy and employ armies of developers to maintain and make rudimentary additions to functionality. The 'economies of scale' with software are almost entirely due to artificially high transaction costs caused by the application-centered world view and the lack of a unified computational framework owned by the public. As a civilization, we would be better off if software could be developed by small, unrelated groups, with an open standard that allowed for these groups to trivially combine functionality produced anywhere on the network.

What I am proposing is a radical shift that could mean the end of huge internet businesses like Google and Facebook. Or rather, it means that Google and Facebook would be forced to compete on functionality with programmers all over the world, any of whom could write similar functionality that could be substituted for Google/Facebook functionality with literally zero switching costs. Oh, I might use Google as 'cloud provider', a place to stick my data and my computations, but this would be using Google as a commodity, an implementation detail, much the way I use the physical computer on which I type this right now. At any point, I could choose to transfer my data and personal functionality to another cloud provider, again with zero switching costs. And while we're at it, perhaps we could dispense with cloud providers entirely and replace them with a peer-to-peer network in which individuals share compute time and local storage!
Bob: I wouldn't knock Google, Twitter, and Instagram... they are serving literally millions of concurrent users. That's a serious technical challenge, you know. 
Alice: A serious technical challenge that has been created artificially! In the world I envision, the (limited) functionality of sites like Twitter could be written as a library and then used in a decentralized way by anyone connected to the internet. Writing such a library would require no servers, no capital, and could be completed by a programmer (or user) in a weekend! Think about it--if I write quicksort as a library function, is there any 'serious technical challenge' in making it possible for my function to be used by millions of users? No, of course not, because my function is pure information and can be transported all over the world and run by a billion people simultaneously, without my having to do anything other than put the code somewhere connected to the internet. But for some strange reason, if I write a function that operates on the follows-graph maintained in an (unnecessarily) centralized way by Twitter, I need to deal with all sorts of complexity if I want this function to be used by more than a few hundred people concurrently? Twitter (and Facebook, and Instagram, and Google) are solving problems created by the 'application as center of the universe' viewpoint that is so common today. 
Bob: Even so, I think you are vastly underestimating the complexity of the software that these companies produce. These companies are coordinating the activities of fleets of computers, doing error handling and recovery, and wrapping up often complex functionality in nice, usable interfaces (which by the way have seen many man-months of tuning and testing) that you do nothing but complain about! We have it so easy! 
Alice: And yet, I still can't get Gmail to do even simple tasks like schedule an email to be sent later or batch up all incoming emails containing a certain phrase into a weekly digest! By the way, I just thought up those use cases on the spot, I could think of dozens more that aren't supported. The problem is, I don't want a machine, I want a toolkit, and Google keeps trying to sell me machines. Perhaps these machines are exquisitely crafted, with extensive tuning and so forth, but a machine with a fixed set of actions can never do all the things that I can imagine might be useful, and I don't want to wait around for Google to implement the functionality I desire as another awkward one-off 'feature' that's poorly integrated and just adds more complexity to an already bloated application. 
Bob: Okay, if you want a toolkit, how about Yahoo! Pipes, or If This Then That? Aren't those sort of what you want? 
Alice: Absolutely not. For one, I don't want my data and functionality locked up with a particular provider like that. I want an open platform. Who knows when Yahoo! might kill off Pipes or start charging inordinate sums of money for it, and who knows if IFTTT is even going to be around a year from now given that they seem to have no business model. I would only use these services for throw-away code I don't care about. Have you ever noticed that all the programming languages people use voluntarily are open source? I think it's because no one wants their creations owned by anyone. But beyond that, the bigger reason I don't like these services is that I want a real programming language, with a real type system that lets me assemble complex functionality with ease and guides me through the process.

Why UX designers should care about type theory

Applications are bad enough in that they trap potentially useful building blocks for larger program ideas behind artificial barriers, but they fail at even their stated purpose of providing an 'intuitive' interface to whatever fixed set of actions and functionality their creators have imagined. Here is why: for all but the simplest applications, there are multiple contexts within the application, and there needs to be a cohesive story for how to present only 'appropriate' actions to the user and prevent nonsensical combinations based on context. This becomes serious business as the number of actions an application offers grows and the set of possible contexts multiplies. As an example, if I have just selected a message in my inbox (this is a 'context'), the 'send' action should not be available, but if I am editing a draft of a message it should be. Likewise, if I have just selected some text, the 'apply Kodachrome-style retro filter' action should not be available, since that only makes sense applied to a picture of some sort.

These are just silly examples, but real applications will have many more actions to organize and present to users in a context-sensitive way. Unfortunately, the way 'applications' tend to do this is with various ad hoc approaches that don't scale very well as more functionality is added--generally, they allow only a fixed set of contexts, and they hardcode what actions are allowed in each context. ('Oh, the send function isn't available from the inbox screen? Okay, I won't add that option to this static menu'; 'Oh, only an integer is allowed here? Okay, I'll add some error checking to this text input') Hence the paradox: applications never seem to do everything we want (because by design they can only support a fixed set of contexts and because how to handle each context must be explicitly hardcoded), and yet we also can't seem to easily find the functionality they do support (because the set of contexts and allowed actions is arbitrary and unguessable in a complex application).

There is already a discipline with a coherent story for how to handle concerns of what actions are appropriate in what contexts: type theory. Which is why I now (half) jokingly introduce Chiusano's 10th corollary:
Any sufficiently advanced user-facing program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a real programming language and type system.
Programming languages and type theory have largely solved the problem of how to constrain user actions to only 'appropriate' alternatives and present these alternatives to users in an exquisitely context-sensitive way. The fundamental contribution of a type system is to provide a compositional language for describing possible forms values can take, and to provide a fully generic program (the typechecker) for determining whether an action (a function) is applicable to a particular value (an argument to the function). Around this core idea we can build UI for autocompletion, perfectly appropriate context menus, program search, and so on. Type systems provide a striking, elegant solution to a problem that UX designers now solve in more ad hoc ways. These ad hoc methods don't scale and can never match what is possible when guided by an actual type system and the programming environment to go with it.
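
As a tiny Scala illustration (all types invented): once contexts and content carry types, "which actions apply here?" becomes a question the typechecker answers rather than a menu table someone maintains by hand:

// A hypothetical application vocabulary.
case class Draft(body: String)
case class ReceivedMessage(body: String)
case class Picture(pixels: Vector[Int])

def send(d: Draft): Unit             = println("sent: " + d.body)
def retroFilter(p: Picture): Picture = p  // imagine a Kodachrome-style filter

send(Draft("hello"))            // applicable: the selection is a Draft
// send(ReceivedMessage("hi"))  // rejected by the typechecker
// retroFilter(Draft("hello"))  // rejected: filters apply to pictures
// A UI backed by the typechecker can offer exactly the actions whose
// argument types match the current selection -- no hardcoded menus.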

The work that remains is more around how to build meaningful, sensitive, real-time interfaces to the typechecker and integrate it within a larger programming environment supporting a mixture of graphical and textual program elements. Note that the richer the type system, the more mileage we get out of this approach.

Conclusion

I'll conclude with a great quote by Rúnar Bjarnason, explaining how we got to this point, and why it is deeply wrong:
In the early days of programming, there were no computers. The first programs were written, and executed, on paper. It wasn't until later that machines were first built that could execute programs automatically. 
During the ascent of computers, an industry of professional computer programmers emerged. Perhaps because early computers were awkward and difficult to use, the focus of these professionals became less thinking about programs and more manipulating the machine. 
Indeed, if you read the Wikipedia entry on "Computer Program", it tells you that computer programs are "instructions for a computer", and that "a computer requires programs to function". This is a curious position, since it's completely backwards. It implies that programming is done in order to make computers do things, as a primary. I’ll warrant that the article was probably written by a professional programmer. 
But why does a computer need to function? Why does a computer even exist? The reality is that computers exist solely for the purpose of executing programs. The machine is not a metaphysical primary. Reality has primacy, a program is a description, an abstraction, a proof of some hypothesis about an aspect of reality, and the computer exists to deduce the implications of that fact for the pursuit of human values.
Though the post talks specifically about not creating our programming languages in the machine's image, we should apply the same reasoning to the useful bundles of data and functionality that we now call 'applications'.

So there you have it. The machines are no longer primary. End the tyranny of applications!

by Paul Chiusano (noreply@blogger.com) at April 15, 2014 07:28 PM

April 07, 2014

scala-lang.org

Scala 2.11.0-RC4 is now available!

We are very pleased to announce Scala 2.11.0-RC4, the next release candidate of Scala 2.11.0! Download it now from scala-lang.org or via Maven Central.

Since RC3, we’ve fixed two blocker bugs, and admitted some final polish for macros and quasiquotes. Here’s the difference between RC4 and RC3.

Please do try out this release candidate to help us find any serious regressions before the final release. The next release candidate (or the final) will be cut on Friday April 11, if there are no unresolved blocker bugs. Our goal is to have the next release be the final – please help us make sure there are no important regressions!

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x ecosystem; please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

For production use, we recommend the latest stable release, 2.10.4.

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

The Scala team and contributors fixed 613 bugs that are exclusive to the Scala 2.11 series (through 2.11.0-RC4)! We also backported as many fixes as possible. With the release of 2.11, backports to 2.10 will be dialed back.

Since the last RC, we fixed 11 issues via 37 merged pull requests.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

Concretely, according to git log --no-merges --oneline master --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn, 111 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, Den Shabalin, Simon Ochsenreither, A. P. Marki, Miguel Garcia, James Iry, Denys Shabalin, Rex Kerr, Grzegorz Kossakowski, Vladimir Nikolaev, Eugene Vigdorchik, François Garillot, Mirco Dotta, Rüdiger Klaehn, Raphael Jolly, Kenji Yoshida, Paolo Giarrusso, Antoine Gourlay, Hubert Plociniczak, Aleksandar Prokopec, Simon Schaefer, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Luc Bourlier, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Vlad Ureche, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Olivier Blanvillain, Lukas Rytz, Iulian Dragos, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Roberto Tyley, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Mark Harrah, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Ward, James Roper, Havoc Pennington, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres.

Thank you all very much.

If you find any errors or omissions in these release notes, please submit a PR!

Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

Before reporting a bug, please have a look at these known issues.

Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

Available projects

The following Scala projects have already been released against 2.11.0-RC4! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

"org.scalacheck"         %% "scalacheck"         % "1.11.3"
"com.typesafe.akka"      %% "akka-actor"         % "2.3.0"
"org.scalatest"          %% "scalatest"          % "2.1.3"
"org.scala-lang.modules" %% "scala-async"        % "0.9.1"
"org.scalafx"            %% "scalafx"            % "8.0.0-R4"
"com.chuusai"            %% "shapeless"          % "1.2.4"
"com.chuusai"            %% "shapeless"          % "2.0.0"
"org.scalamacros"        %% "paradise"           % "2.0.0-M7"
"org.scalaz"             %% "scalaz-core"        % "7.0.6"
"org.specs2"             %% "specs2"             % "2.3.10"

The following projects were released against 2.11.0-RC3, with an RC4 build hopefully following soon:

"org.scalafx"            %% "scalafx"            % "1.0.0-R8"
"com.github.scopt"       %% "scopt"              % "3.2.0"
"com.nocandysw"          %% "platform-executing" % "0.5.0"
"io.argonaut"            %% "argonaut"           % "6.0.3"
"com.clarifi"            %% "f0"                 % "1.1.1"
"org.parboiled"          %% "parboiled-scala"    % "1.1.6"
"com.sksamuel.scrimage"  %% "scrimage"           % "1.3.16"
"org.scala-stm"          %% "scala-stm"          % "0.7"
"org.monifu"             %% "monifu"             % "0.4"

Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full sbt build and the equivalent Maven build, see the example.

scalaVersion        := "2.11.0-RC4"

crossScalaVersions  := Seq("2.11.0-RC4", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.1"
    case _ =>
      libraryDependencies.value
  }
}

Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

  • SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>.
  • SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }, as the bar in bar: String refers to the bar with type T. The Foo[String]’s bar is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)

The following changes were made after a deprecation cycle. (Thank you, @soc, for leading the deprecation effort!)

  • SI-6809 Case classes without a parameter list are no longer allowed.
  • SI-7618 Octal number literals are no longer supported.

Finally, some notable improvements and bug fixes:

  • SI-7296 Case classes with > 22 parameters are now allowed.
  • SI-3346 Implicit arguments of implicit conversions now guide type inference.
  • SI-6240 Thread safety of reflection API.
  • #3037 Experimental support for SAM synthesis.
  • #2848 Name-based pattern-matching.
  • SI-6169 Infer bounds of Java-defined existential types.
  • SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
  • SI-5917 Improve public AST creation facilities.
  • SI-8063 Expose much needed methods in public reflection/macro API.
  • SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler). Deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

The following language “warts” have been deprecated:

  • SI-7605 Procedure syntax (only under -Xfuture).
  • SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
  • SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions.
  • SI-8035 Automatic insertion of () on missing argument lists.
  • SI-6675 Auto-tupling in patterns.
  • SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
  • SI-1503 Unsound type assumption for stable identifier and literal patterns.
  • SI-7629 View bounds (under -Xfuture).

We’d like to emphasize the following library deprecations:

  • #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated to be marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
  • scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
  • SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise.
  • SI-3235 Deprecate round on Int and Long (#3581).
  • We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).

Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).
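As a concrete sketch of the constraint on additions (Util is a made-up class):

class Util { // version 1 of a library class
  def foo: Int = 1
  // Version 2 could add `def bar: Int = 2`: that preserves backwards
  // compatibility (v1 clients still link), but breaks forwards
  // compatibility: a client compiled against v2 that calls bar throws
  // NoSuchMethodError (a LinkageError) when run against the v1 jar.
}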

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

Meta

Note that so far we’ve only talked about the jars generated by scalac for the standard library and reflection. Our policies do not extend to the meta-issue: ensuring binary compatibility for bytecode generated from identical sources by different versions of scalac. (The same problem exists for compiling on different JDKs.) While we strive to achieve this, it’s not something we can test in general. Notable examples where we know meta-binary compatibility is hard to achieve: specialisation and the optimizer.

In short, if binary compatibility of your library is important to you, use MiMa to verify compatibility before releasing. Compiling identical sources with different versions of the scala compiler (or on different JVM versions!) could result in binary incompatible bytecode. This is rare, and we try to avoid it, but we can’t guarantee it will never happen.
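With the sbt MiMa plugin, such a check looks roughly like this (a sketch: the coordinates are placeholders, and key names have varied between plugin versions):

// project/plugins.sbt (pick the current plugin version)
addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "<version>")

// build.sbt: declare the previously released artifact to diff against
mimaPreviousArtifacts := Set("com.example" %% "mylib" % "1.0.0")

// then run: sbt mimaReportBinaryIssues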

Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that this means it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)
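In sbt terms, the split looks something like this (version numbers are illustrative):

scalaVersion := "2.11.0" // brings in the core scala-library

// modules are separate, opt-in dependencies:
libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.1"

// or depend on everything the distribution ships with:
libraryDependencies += "org.scala-lang" % "scala-library-all" % "2.11.0"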

New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

  • Collections

    • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
    • Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively); see the quick example after this list.
    • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
    • List has improved performance on map, flatMap, and collect.
    • See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
  • Modularization

    • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above.
    • The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
  • Reflection, macros and quasiquotes

    • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
    • See also this summary of the experimental side of the 2.11 development cycle.
    • #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
  • Back-end

    • The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
    • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
    • Branch elimination through constant analysis #2214
  • Compiler Performance

    • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2-M2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
    • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
    • Improve performance of reflection SI-6638
  • IDE

    • Numerous bug fixes and improvements!

  • REPL

  • Warnings

    • Warn about unused private / local terms and types, and unused imports, under -Xlint. This will even tell you when a local var could be a val.

  • Slimming down the compiler

    • The experimental .NET backend has been removed from the compiler.
    • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
    • Search and destroy mission for ~5000 chunks of dead code. #1648
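As a taste of the collections work mentioned above, here is the new mutable LongMap in action (a minimal sketch):

import scala.collection.mutable.LongMap

val m = LongMap(1L -> "one", 2L -> "two") // specialized on Long keys, no boxing
m(3L) = "three"                           // in-place update
m.getOrElse(5L, "?")                      // "?"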

License clarification

Scala is now distributed under the standard 3-clause BSD license. Originally the same 3-clause BSD license was adopted, but it was slightly reworded over the years, and so the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

A big thank you to all the contributors!

Commits  Author
68       Adriaan Moors
40       Iain McGinniss
9        Jason Zaugg
7        Denys Shabalin
5        Eugene Burmako
5        Simon Ochsenreither
4        A. P. Marki
1        Grzegorz Kossakowski
1        François Garillot

Commits and the issues they fixed since v2.11.0-RC3

Issue(s)            Commit   Message
SI-8466             9fbac09  SI-8466 fix quasiquote crash on recursively iterable unlifting
SI-7291, SI-8460    1c330e6  SI-8460 Fix regression in divergent implicit recovery
SI-8460             5e795fc  Refactor handling of failures in implicit search
SI-6054             91fb5c0  SI-6054 Modern eta-expansion examples that almost run
SI-5610, SI-6069    b3adae6  SI-6069 Preserve by-name during eta-expansion
SI-6054             3fb5acc  SI-6054 don't use the defunct List.map2 in example
SI-5136             71e45e0  SI-5136 correct return type for unapplySeq
SI-6195             aa6e4b3  SI-6195 stable members can only be overridden by stable members
SI-5605             1921528  SI-5605 case class equals only considers first param section
SI-6054             51f3ac1  SI-6054 correct eta-expansion in method value using placeholder syntax
SI-5155             3c0d964  SI-5155 xml patterns do not support cdata, entity refs or comments
SI-5089             84bba26  SI-5089 update definition implicit scope in terms of parts of a type
SI-7313             227e11d  SI-7313 method types of implicit and non-implicit parameter sections are never e
SI-7672             7be2a6c  SI-7672 explicit top-level import of Predef precludes implicit one
SI-5370             aa64187  SI-5370 PartialFunction is a Function with queryable domain
SI-4980             4615ec5  SI-4980 isInstanceOf does not do outer checks
SI-1972             f0b37c2  SI-1972 clarify getter and setter must be declared together
SI-5086             5135bae  SI-5086 clean up EBNF
SI-5065             32e0943  SI-5065 val/var is optional for a constructor parameter
SI-5209             64b7338  SI-5209 correct precedence of infix operators starting with ! =
SI-4249             e197cf8  SI-4249 try/catch accepts expression
SI-7937             d614228  SI-7937 In for, semi before guard never required
SI-4583             19ab789  SI-4583 UnicodeEscape does not allow multiple backslashes
SI-8388             0bac64d  SI-8388 consistently match type trees by originals
SI-8387             f10d754  SI-8387 don't match new as a function application
SI-8350             2fea950  SI-8350 treat single parens equivalently to no-parens in new
SI-8451             a0c3bbd  SI-8451 quasiquotes now handle quirks of secondary constructors
SI-8437             9326264  SI-8437 macro runtime now also picks inherited macro implementations
SI-8411             5e23a6a  SI-8411 match desugared partial functions
SI-8200             fa91b17  SI-8200 provide an identity liftable for trees
SI-7902             5f4011e  [backport] SI-7902 Fix spurious kind error due to an unitialized symbol
SI-8205             8ee165c  SI-8205 [nomaster] backport test pos.lineContent
SI-8126, SI-6566    806b6e4  Backports library changes related to SI-6566 from a419799
SI-8146             ff13742  [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types
SI-8420             b6a54a8  SI-8420 don't crash on unquoting of non-liftable native type
SI-8428             aa1e1d0  SI-8428 Refactor ConcatIterator
SI-8428             1fa46a5  SI-8428 Fix regression in iterator concatenation

Complete commit list!

sha      Title
2ba0453  Further tweak version of continuations plugin in scala-dist.pom
9fbac09  SI-8466 fix quasiquote crash on recursively iterable unlifting
afccae6  Refactor rankImplicits, add some more docs
d345424  Refactor: keep DivergentImplicitRecovery logic together.
1c330e6  SI-8460 Fix regression in divergent implicit recovery
5e795fc  Refactor handling of failures in implicit search
8489be1  Rebase #3665
63783f5  Disable more of the Travis spec build for PR validation
9cc0911  Minor typographical fixes for lexical syntax chapter
f40d63a  Don't mention C#
bb2a952  Reducing overlap of code and math.
3a75252  Simplify CSS, bigger monospace to match math
91fb5c0  SI-6054 Modern eta-expansion examples that almost run
b3adae6  SI-6069 Preserve by-name during eta-expansion
a89157f  Stubs for references chapter, remains TODO
0b48dc2  Number files like chapters. Consolidate toc & preface.
0f1dcc4  Minor cleanup in aisle README
6ec6990  Skip step bound to fail in Travis PR validation
12720e6  Remove scala-continuations-plugin from scala-library-all
3560ddc  Start ssh-agent
b102ffc  Disable strict host checking
0261598  Jekyll generated html in spec/ directory
71c1716  Add language to code blocks. Shorter Example title.
abd0895  Fix #6: automatic section numbering.
5997e32  #9 try to avoid double slashes in url
09f2a26  require redcarpet 3.1 for user-friendly anchors
f16ab43  use simple quotes, fix indent, escape dollar
5629529  liquid requires SSA?
128c5e8  sort pages in index
8dba297  base url
3df5773  formatting
7307a03  TODO: number headings using css
617bdf8  mathjax escape dollar
a1275c4  TODO: binding example
c61f554  fix indentation for footnotes
52898fa  allow math in code
827f5f6  redcarpet
0bc3ec9  formatting
2f3d0fd  Jekyll 2 config for redcarpet 3.1.1
e6ecfd0  That was fun: fix internal links.
d8a09e2  formatting
9c757bb  fix some links
453625e  wip: jekyllify
3fb5acc  SI-6054 don't use the defunct List.map2 in example
71e45e0  SI-5136 correct return type for unapplySeq
aa6e4b3  SI-6195 stable members can only be overridden by stable members
1921528  SI-5605 case class equals only considers first param section
51f3ac1  SI-6054 correct eta-expansion in method value using placeholder syntax
78d96ea  formatting: tables and headings
3c0d964  SI-5155 xml patterns do not support cdata, entity refs or comments
84bba26  SI-5089 update definition implicit scope in terms of parts of a type
227e11d  SI-7313 method types of implicit and non-implicit parameter sections are never e
7be2a6c  SI-7672 explicit top-level import of Predef precludes implicit one
aa64187  SI-5370 PartialFunction is a Function with queryable domain
4615ec5  SI-4980 isInstanceOf does not do outer checks
f0b37c2  SI-1972 clarify getter and setter must be declared together
5135bae  SI-5086 clean up EBNF
32e0943  SI-5065 val/var is optional for a constructor parameter
64b7338  SI-5209 correct precedence of infix operators starting with ! =
1130d10  formatting
e197cf8  SI-4249 try/catch accepts expression
622ffd4  wip
d614228  SI-7937 In for, semi before guard never required
507e58b  github markdown: tables
09c957b  github markdown: use ###### for definitions and notes
9fb8276  github markdown: use ###### for examples
19ab789  SI-4583 UnicodeEscape does not allow multiple backslashes
1ca2095  formatting
b75812d  Mention WIP in README
9031467  Catch up with latex spec.
21ca2cf  convert {\em } to _..._
37ef8a2  github markdown: numbered definition
b44c598  github markdown: code blocks
9dec37b  github markdown: drop css classes
df2f3f7  github markdown: headers
839fd6e  github markdown: numbered lists
fa4aba5  new build options
b71a2c1  updated README.md
d8f0a93  rendering error fix
ab8f966  added tex source build
a80a894  Typographical adjustments
34eb920  Fix fonts to enable both old-style and lining numerals
8f1bd7f  Over-wide line fix for types grammar
9cee383  Replaced build script with make file
3f339c8  Minor pagination tweak
50ce322  Miscellaneous cleanups:
2311e34  fix poorly labeled section links fix over-wide grammar text
e7ade69  Use the original type faces
2c21733  Adjust layout
54273a3  set Luxi Mono and Heuristica (Utopia) as the default fonts for monospace and mai
1352994  use \sigma instead of raw unicode character in math mode, as it does not render
7691d7f  added build step for ebook
9a8134a  PDF building now working with pandoc 1.10.1
ab50eec  using standard markdown for numbered lists (hopefully better rendering on github
94198c7  fixed reference to class diagram fixed undefined macro
ea177a2  fixed inline code block
cdaeb84  fixed inline code blocks fixed math array for PDF output
1ec5965  fixed inline code blocks removed LaTeX labels converted TODOs to comments
3404f54  fix for undefined macro
990d4f0  fixed undefined macros and converted comment block
580d5d6  fix for unicode character conversion error in producing PDF fix for grammar code
1847283  standard library chapter converted
7066c70  converted xml expressions and user defined annotations chapters
dc958b2  fixed minor math layout and unsupported commands
2f67c76  Converted pattern matching chapter
a327584  Implicit Parameters and Values chapter converted
a368e9c  expressions chapter converted, some math-mode errors still exist
fd283b6  conversion of classes and objects chapter
79833dc  converted syntax summary
b871ec6  basic declarations and definitions chapter converted, needs second-pass review.
bb53357  types chapter fully converted. Added link to jquery and some experimental code f
3340862  accidentally committed OS resource
eb3e02a  MathJAX configuration for inline math inside code blocks
a805b04  interim commit of conversion of types chapter
7d50d8f  - Grouping of text for examples in Lexical Syntax chapter fixed - Style of examp
f938a7c  Identifiers, Names and Scopes chapter converted. Minor CSS tweaks to make exampl
7c16776  removed some stray LaTeX commands from Lexical Syntax chapter, and a back-refere
82435f1  experimental restyling of examples to try and look a bit more like the original
4f86c27  fixed missing newline between example text and delimited code expression
5e2a788  preface and lexical syntax chapter converted, other chapters split into their ow
0bac64d  SI-8388 consistently match type trees by originals
f10d754  SI-8387 don't match new as a function application
2fea950  SI-8350 treat single parens equivalently to no-parens in new
a0c3bbd  SI-8451 quasiquotes now handle quirks of secondary constructors
9326264  SI-8437 macro runtime now also picks inherited macro implementations
5e23a6a  SI-8411 match desugared partial functions
f9a5880  introduces Mirror.typeOf
fa91b17  SI-8200 provide an identity liftable for trees
db300d4  [backport] no longer warns on calls to vampire macros
a16e003  Bump version to 2.10.5 for nightly builds.
5f4011e  [backport] SI-7902 Fix spurious kind error due to an unitialized symbol
8ee165c  SI-8205 [nomaster] backport test pos.lineContent
d167f14  [nomaster] corrects an error in reify’s documentation
806b6e4  Backports library changes related to SI-6566 from a419799
ff13742  [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types
cbb88ac  [nomaster] Update MiMa and use new wildcard filter
b6a54a8  SI-8420 don't crash on unquoting of non-liftable native type
aa1e1d0  SI-8428 Refactor ConcatIterator
1fa46a5  SI-8428 Fix regression in iterator concatenation
ff02fda  Bump versions for 2.11.0-RC3

April 07, 2014 10:00 PM

April 06, 2014

Ruminations of a Programmer

Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

In the last post I looked at a pattern that enforces constraints to ensure domain objects honor the domain rules. But what exactly is a domain object? What should be the granularity of an object that my solution model should expose so that it makes sense to a domain user? After all, the domain model should speak the language of the domain. We may have a cluster of entities modeling various concepts of the domain. But only some of them can be published as abstractions to the user of the model. The others can be treated as implementation artifacts and are best hidden under the covers of the published ones.

An aggregate in domain driven design is a published abstraction that provides a single point of interaction to a specific domain concept. Considering the classes I introduced in the last post, an Order is an aggregate. It encapsulates the details that an Order is composed of in the real world (well, only barely in this example, which is only for illustration purposes :-)).

Note an aggregate can consist of other aggregates - e.g. we have a Customer instance within an Order. Eric Evans in his book on Domain Driven Design provides an excellent discussion of what constitutes an Aggregate.

Functional Updates of Aggregates with Lens


This is not a post about Aggregates and how they fit in the realm of domain driven design. In this post I will talk about how to use some patterns to build immutable aggregates. Immutable data structures offer a lot of advantages, so build your aggregates ground up as immutable objects. Algebraic Data Type (ADT) is one of the patterns to build immutable aggregate objects. Primarily coming from the domain of functional programming, ADTs offer powerful techniques of pattern matching that help you match values against patterns and bind variables to successful matches. In Scala we use case classes as ADTs that give immutable objects out of the box ..

case class Order(orderNo: String, orderDate: Date, customer: Customer,
  lineItems: Vector[LineItem], shipTo: ShipTo,
  netOrderValue: Option[BigDecimal] = None, status: OrderStatus = Placed)
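Being an ADT, Order also pattern matches out of the box. A quick sketch (Shipped is a hypothetical case of the OrderStatus ADT, not shown in this post):

def describe(o: Order) = o.status match {
  case Placed  => "awaiting fulfilment"
  case Shipped => "on its way to " + o.shipTo.name
}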

Like all good aggregates, we need to provide a single point of interaction to users. Of course we can access all properties using the accessors of case classes. But what about updates? We can update the orderNo of an order like this ..

val o = Order( .. )
o.copy(orderNo = newOrderNo)

which gives us a copy of the original order with the new order no. We don't mutate the original order. But anybody with some knowledge of Scala will realize that this becomes pretty clunky when we have to deal with nested object updates. E.g. in the above case, ShipTo is defined as follows ..

case class Address(number: String, street: String, city: String, zip: String)
case class ShipTo(name: String, address: Address)

So, here you go in order to update the zip code of a ShipTo ..

val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
s.copy(address = s.address.copy(zip = "80231"))

Not really pleasing, and it can become incomprehensible pretty soon.

In our domain model we use an abstraction called a Lens for updating Aggregates. In layman's terms, a lens is an encapsulated get and set combination. The get extracts a small part from a larger whole, while the set transforms the larger abstraction with a smaller part taken as a parameter.

case class Lens[A, B](get: A => B, set: (A, B) => A)
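For example, the clunky ShipTo update from earlier becomes a one-liner once wrapped in this naive Lens (a sketch; shipToZip is a name introduced here, not from the post):

val shipToZip = Lens[ShipTo, String](
  get = _.address.zip,
  set = (s, z) => s.copy(address = s.address.copy(zip = z)))

shipToZip.get(s)          // "80233"
shipToZip.set(s, "80231") // a new ShipTo; the original is untouched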

This is a naive definition of a Lens in Scala. Sophisticated lens designs go a long way to ensure proper abstraction and composition. scalaz provides one such implementation out of the box that exploits the similarity in structure between the get and the set to generalize the lens definition in terms of another abstraction named Store. As it happens so often in functional programming, Store happens to abstract yet another pattern called the Comonad. You can think of a Comonad as the inverse of a Monad. But in case you are more curious, and have wondered how lenses form "the Coalgebras for the Store Comonad", have a look at the 2 papers here and here.

Anyway for us mere domain modelers, we will use the Lens implementation as in scalaz .. here's a lens that helps us update the OrderStatus within an Order ..

val orderStatus = Lens.lensu[Order, OrderStatus] (
  (o, value) => o.copy(status = value),
  _.status
)

and use it as follows ..

val o = Order( .. )
orderStatus.set(o, Placed)

will change the status field of the Order to Placed. Let's have a look at some of the compositional properties of a lens which help us write readable code for functionally updating nested structures.

Composition of Lenses

First let's define some individual lenses ..

// lens for updating a ShipTo of an Order
val orderShipTo = Lens.lensu[Order, ShipTo] (
  (o, sh) => o.copy(shipTo = sh),
  _.shipTo
)

// lens for updating an address of a ShipTo
val shipToAddress = Lens.lensu[ShipTo, Address] (
  (sh, add) => sh.copy(address = add),
  _.address
)

// lens for updating a city of an address
val addressToCity = Lens.lensu[Address, String] (
  (add, c) => add.copy(city = c),
  _.city
)

And now we compose them to define a lens that directly updates the city of a ShipTo belonging to an Order ..

// compositionality FTW
def orderShipToCity = orderShipTo andThen shipToAddress andThen addressToCity

Now updating a city of a ShipTo in an Order is as simple and expressive as ..

val o = Order( .. )
orderShipToCity.set(o, "London")

The best part of using such compositional data structures is that it makes your domain model implementation readable and expressive to the users of your API. And yet your aggregate remains immutable.

Let's look at another use case when the nested object is a collection. scalaz offers partial lenses that you can use for such composition. Here's an example where we build a lens that updates the value member within a LineItem of an Order. A LineItem is defined as ..

case class LineItem(item: Item, quantity: BigDecimal,
  value: Option[BigDecimal] = None, discount: Option[BigDecimal] = None)

and an Order has a collection of LineItems. Let's define a lens that updates the value within a LineItem ..

val lineItemValue = Lens.lensu[LineItem, Option[BigDecimal]] (
  (l, v) => l.copy(value = v),
  _.value
)

and then compose it with a partial lens that helps us update a specific item within a vector. Note how we convert our lineItemValue lens to a partial lens using the unary operator ~ ..

// a lens that updates the value in a specific LineItem within an Order 
def lineItemValues(i: Int) = ~lineItemValue compose vectorNthPLens(i)

Now we can use this composite lens to functionally update the value field of each of the items in a Vector of LineItems using some specific business rules ..

// lis: the Vector[LineItem] of an Order; unitPrice: Item => Option[BigDecimal]
(0 to lis.length - 1).foldLeft(lis) { (s, i) =>
  val li = lis(i)
  lineItemValues(i).set(s, unitPrice(li.item).map(_ * li.quantity)).getOrElse(s)
}

In this post we saw how we can handle aggregates functionally and without any in-place mutation. This keeps the model pure and helps us implement domain models that have sane behavior even in concurrent settings, without any explicit use of locks and semaphores. In the next post we will take a look at how we can use such compositional structures to make the domain model speak the ubiquitous language of the domain - another pattern recommended by Eric Evans in domain driven design.

by Debasish Ghosh (noreply@blogger.com) at April 06, 2014 03:19 PM

April 04, 2014

Quoi qu'il en soit

Becoming really rich with Scala

Update, 31 March 2014: This article uses Scala 2.8.
  • An updated version of this code using Scala 2.10 is available on github:
  •   git clone https://github.com/azzoti/get-rich-with-scala.git
  • It's an Eclipse Scala IDE project and a Maven project.
  • It can be run with:
  •   mvn scala:run -DmainClass=etf.analyzer.Program

Becoming really rich with C# was a great example of what's coming in the version of C# in Visual Studio 2010, and it makes it obvious that C# is leaving Java in the dust. Now that Visual Studio 2010 beta 2 is available, you can download Luca's code and try it out.

For this post, I've translated the C# code into Scala while trying to preserve the C# original style. To do this I have added support code in order to match the C# style. This was easy and shows off Scala's extensibility.

The features or libraries added or used to do this are:
  • Scala-Time: A Java Joda Time library wrapper.
  • The "using" block from Martin Odersky's FOSDEM '09 talk
  • An EventHandler class for simulating C# Events
  • The Jetty HTTPClient from Eclipse that I wrapped to resemble the C# WebClient api.
  • Arithmetic operations for Option[Double]. (Option[Double] is Scala's equivalent of the C# nullable type double?. In C# you can use double? variables in expressions, with expressions returning null if any part of the expression is null. In Scala, you can't use Option[Double] in arithmetic expressions out of the box, but it's very easy to add this ability in a small library.)
  • The Scala code is written with the latest 2.8 pre-release of Scala and uses one or two features from its latest standard library not present in the latest stable release.

While the Scala code is slightly shorter than the C# code, it is supported by extra code or libraries that I have found or had to write. C# already has using blocks, Events, reasonable datetime management, a WebClient, and nullable double types that handle arithmetic operations sensibly.

For whatever reason, the Scala code runs much faster than the C# code, but there is a large amount of internet access involved and I suspect that the C# web client should be configured to use more threads. [Update: Luca just suggested I comment out the C# line ServicePointManager.DefaultConnectionLimit = 10; and this does indeed make the C# code much faster.]



Original C# | Scala
See notes after the table

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using System.IO;

namespace ETFAnalyzer {

struct Event {
internal Event(DateTime date, double price) { Date = date; Price = price; }
internal readonly DateTime Date;
internal readonly double Price;
}
class Summary {
internal Summary(string ticker, string name, string assetClass,
string assetSubClass, double? weekly, double? fourWeeks,
double? threeMonths, double? sixMonths, double? oneYear,
double? stdDev, double price, double? mav200) {
Ticker = ticker;
Name = name;
AssetClass = assetClass;
AssetSubClass = assetSubClass;
// Abracadabra ...
LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4;
Weekly = weekly;
FourWeeks = fourWeeks;
ThreeMonths = threeMonths;
SixMonths = sixMonths;
OneYear = oneYear;
StdDev = stdDev;
Mav200 = mav200;
Price = price;
}
internal readonly string Ticker;
internal readonly string Name;
internal readonly string AssetClass;
internal readonly string AssetSubClass;
internal readonly double? LRS;
internal readonly double? Weekly;
internal readonly double? FourWeeks;
internal readonly double? ThreeMonths;
internal readonly double? SixMonths;
internal readonly double? OneYear;
internal readonly double? StdDev;
internal readonly double? Mav200;
internal double Price;

internal static void Banner() {
Console.Write("{0,-6}", "Ticker");
Console.Write("{0,-50}", "Name");
Console.Write("{0,-12}", "Asset Class");
Console.Write("{0,4}", "RS");
Console.Write("{0,4}", "1Wk");
Console.Write("{0,4}", "4Wk");
Console.Write("{0,4}", "3Ms");
Console.Write("{0,4}", "6Ms");
Console.Write("{0,4}", "1Yr");
Console.Write("{0,6}", "Vol");
Console.WriteLine("{0,2}", "Mv");
}

internal void Print() {

Console.Write("{0,-6}", Ticker);
Console.Write("{0,-50}", new String(Name.Take(48).ToArray()));
Console.Write("{0,-12}", new String(AssetClass.Take(10).ToArray()));
Console.Write("{0,4:N0}", LRS * 100);
Console.Write("{0,4:N0}", Weekly * 100);
Console.Write("{0,4:N0}", FourWeeks * 100);
Console.Write("{0,4:N0}", ThreeMonths * 100);
Console.Write("{0,4:N0}", SixMonths * 100);
Console.Write("{0,4:N0}", OneYear * 100);
Console.Write("{0,6:N0}", StdDev * 100);
if (Price <= Mav200)
Console.WriteLine("{0,2}", "X");
else
Console.WriteLine();
}
}

class TimeSeries {
internal readonly string Ticker;
readonly DateTime _start;
readonly Dictionary<DateTime, double> _adjDictionary;
readonly string _name;
readonly string _assetClass;
readonly string _assetSubClass;

internal TimeSeries(string ticker, string name, string assetClass, string assetSubClass, IEnumerable<Event> events) {
Ticker = ticker;
_name = name;
_assetClass = assetClass;
_assetSubClass = assetSubClass;
_start = events.Last().Date;
_adjDictionary = events.ToDictionary(e => e.Date, e => e.Price);
}

bool GetPrice(DateTime when, out double price, out double shift) {
// To nullify the effect of hours/min/sec/millisec being different from 0
when = new DateTime(when.Year, when.Month, when.Day);
var found = false;
shift = 1;
double aPrice = 0;
while (when >= _start && !found) {
if (_adjDictionary.TryGetValue(when, out aPrice)) {
found = true;
}
when = when.AddDays(-1);
shift -= 1;
}
price = aPrice;
return found;
}

double? GetReturn(DateTime start, DateTime end) {
var startPrice = 0.0;
var endPrice = 0.0;
var shift = 0.0;
var foundEnd = GetPrice(end, out endPrice, out shift);
var foundStart = GetPrice(start.AddDays(shift), out startPrice, out shift);
if (!foundStart || !foundEnd)
return null;
else
return endPrice / startPrice - 1;
}

internal double? LastWeekReturn() {
return GetReturn(DateTime.Now.AddDays(-7), DateTime.Now);
}
internal double? Last4WeeksReturn() {
return GetReturn(DateTime.Now.AddDays(-28), DateTime.Now);
}
internal double? Last3MonthsReturn() {
return GetReturn(DateTime.Now.AddMonths(-3), DateTime.Now);
}
internal double? Last6MonthsReturn() {
return GetReturn(DateTime.Now.AddMonths(-6), DateTime.Now);
}
internal double? LastYearReturn() {
return GetReturn(DateTime.Now.AddYears(-1), DateTime.Now);
}
internal double? StdDev() {
var now = DateTime.Now;
now = new DateTime(now.Year, now.Month, now.Day);
var limit = now.AddYears(-3);
var rets = new List<double>();
while (now >= _start.AddDays(12) && now >= limit) {
var ret = GetReturn(now.AddDays(-7), now);
rets.Add(ret.Value);
now = now.AddDays(-7);
}
var mean = rets.Average();
var variance = rets.Select(r => Math.Pow(r - mean, 2)).Sum();
var weeklyStdDev = Math.Sqrt(variance / rets.Count);
return weeklyStdDev * Math.Sqrt(40);
}
internal double? MAV200() {
return _adjDictionary.ToList()
.OrderByDescending(k => k.Key)
.Take(200).Average(k => k.Value);
}
internal double TodayPrice() {
var price = 0.0;
var shift = 0.0;
GetPrice(DateTime.Now, out price, out shift);
return price;
}
internal Summary GetSummary() {
return new Summary(Ticker, _name, _assetClass, _assetSubClass,
LastWeekReturn(), Last4WeeksReturn(), Last3MonthsReturn(),
Last6MonthsReturn(), LastYearReturn(), StdDev(), TodayPrice(),
MAV200());
}
}

class Program {

static string CreateUrl(string ticker, DateTime start, DateTime end)
{
return @"http://ichart.finance.yahoo.com/table.csv?s=" + ticker +
"&a="+(start.Month - 1).ToString()+"&b="+start.Day.ToString()+"&c="+start.Year.ToString() +
"&d="+(end.Month - 1).ToString()+"&e="+end.Day.ToString()+"&f="+end.Year.ToString() +
"&g=d&ignore=.csv";
}

static void Main(string[] args) {
// If you rise this above 5 you tend to get frequent connection closing on my machine
// I'm not sure if it is msft network or yahoo web service
ServicePointManager.DefaultConnectionLimit = 10;

var tickers =
File.ReadAllLines("ETFTest.csv")
.Skip(1)
.Select(l => l.Split(new[] { ',' }))
.Where(v => v[2] != "Leveraged")
.Select(values => Tuple.Create(values[0], values[1], values[2], values[3]))
.ToArray();

var len = tickers.Length;

var start = DateTime.Now.AddYears(-2);
var end = DateTime.Now;
var cevent = new CountdownEvent(len);
var summaries = new Summary[len];

for(var i = 0; i < len; i++) {
var t = tickers[i];
var url = CreateUrl(t.Item1, start, end);
using (var webClient = new WebClient()) {
webClient.DownloadStringCompleted +=
new DownloadStringCompletedEventHandler(downloadStringCompleted);
webClient.DownloadStringAsync(new Uri(url), Tuple.Create(t, cevent, summaries, i));
}
}

cevent.Wait();
Console.WriteLine("\n");

var top15perc =
summaries
.Where(s => s.LRS.HasValue)
.OrderByDescending(s => s.LRS)
.Take((int)(len * 0.15));
var bottom15perc =
summaries
.Where(s => s.LRS.HasValue)
.OrderBy(s => s.LRS)
.Take((int)(len * 0.15));

Console.WriteLine();
Summary.Banner();
Console.WriteLine("TOP 15%");
foreach(var s in top15perc)
s.Print();

Console.WriteLine();
Console.WriteLine("Bottom 15%");
foreach (var s in bottom15perc)
s.Print();

}

static void downloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) {
var bigTuple = (Tuple<Tuple<string, string, string, string>, CountdownEvent, Summary[], int>)e.UserState;
var tuple = bigTuple.Item1;
var cevent = bigTuple.Item2;
var summaries = bigTuple.Item3;
var i = bigTuple.Item4;
var ticker = tuple.Item1;
var name = tuple.Item2;
var asset = tuple.Item3;
var subAsset = tuple.Item4;

if (e.Error == null) {
var adjustedPrices =
e.Result
.Split(new[] { '\n' })
.Skip(1)
.Select(l => l.Split(new[] { ',' }))
.Where(l => l.Length == 7)
.Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));

var timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
summaries[i] = timeSeries.GetSummary();
cevent.Signal();
Console.Write("{0} ", ticker);
} else {
Console.WriteLine("[{0} ERROR] ", ticker);
summaries[i] = new Summary(ticker,name,"ERROR","ERROR",0,0,0,0,0,0,0,0);
cevent.Signal();
}
}
}
}

package etf.analyzer

import scala.io.Source
import org.scala_tools.time.Imports._
import org.scala_tools.option.math.Imports._
import org.joda.time.Days
import org.scala_tools.using.Using
import org.scala_tools.web.WebClient
import org.scala_tools.web.WebClientConnections
import org.scala_tools.web.DownloadStringCompletedEventArgs
import java.io.File
import java.util.concurrent.CountDownLatch

case class Event (date : DateTime, price : Double) {}


case class Summary (
ticker : String, name : String, assetClass : String,
assetSubClass : String, weekly : Option[Double],
fourWeeks : Option[Double], threeMonths : Option[Double],
sixMonths : Option[Double], oneYear : Option[Double],
stdDev : Double, price : Double, mav200 : Double
) {



// Abracadabra ...
val LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4



















def banner() = {
printf("%-6s", "Ticker")
printf("%-50s", "Name")
printf("%-12s", "Asset Class")
printf("%4s", "RS")
printf("%4s", "1Wk")
printf("%4s", "4Wk")
printf("%4s", "3Ms")
printf("%4s", "6Ms")
printf("%4s", "1Yr")
printf("%6s", "Vol")
printf("%2s\n", "Mv")
}

def print() = {

printf("%-6s", ticker);
printf("%-50s", new String(name.toArray.take(48)))
printf("%-12s", new String(assetClass.toArray.take(10)));
printf("%4.0f", LRS * 100 getOrElse null)
printf("%4.0f", weekly * 100 getOrElse null)
printf("%4.0f", fourWeeks * 100 getOrElse null)
printf("%4.0f", threeMonths * 100 getOrElse null)
printf("%4.0f", sixMonths * 100 getOrElse null)
printf("%4.0f", oneYear * 100 getOrElse null)
printf("%6.0f", stdDev * 100);
if (price <= mav200) {
printf("%2s\n", "X");
} else {
println();
}
}
}

case class TimeSeries (
ticker : String, name : String, assetClass : String,
assetSubClass : String, private val events : Iterable[Event]
) {

private val _adjDictionary : Map[DateTime, Double]
= Map() ++ events.map(e => (e.date -> e.price))
private val _start = events.last.date




// Add sum and average functions to all Iterable[Double] used locally
private implicit def iterableWithSumAndAverage(c: Iterable[Double]) = new {
def sum = c.foldLeft(0.0)(_ + _)
def average = sum / c.size
}

def getPrice(whenp : DateTime) : Option[(Double,Int)] = {
var when = new DateTime(whenp.year.get,whenp.month.get,whenp.day.get,0,0,0,0)
var found = false
var shift = 1
var aPrice = 0.0
while (when >= _start && !found) {
if (_adjDictionary.contains(when)) {
aPrice = _adjDictionary(when)
found = true
}
when = when - 1.days
shift -= 1
}
// Either return the price and the shift or None if no price was found
if (found) Some((aPrice, shift)) else None
}

def getReturn(start: DateTime, end: DateTime) : Option[Double] = {
for {
(endPrice, daysBefore) <- getPrice(end)
(startPrice, _) <- getPrice(start + daysBefore.days)
} yield endPrice / startPrice - 1.0
}

def lastWeekReturn = getReturn(DateTime.now - 7.days, DateTime.now)
def last4WeeksReturn = getReturn(DateTime.now - 28.days, DateTime.now)
def last3MonthsReturn = getReturn(DateTime.now - 3.months, DateTime.now)
def last6MonthsReturn = getReturn(DateTime.now - 6.months, DateTime.now)
def lastYearReturn = getReturn(DateTime.now - 1.years, DateTime.now)

def stdDev : Double = {
val today = DateTime.now
val limit = today - 3.years
val dates = Iterator.iterate(today)(_ - 7.days)
.takeWhile(d => d >= (_start + 12.days) && d >= limit)
.toList
val rets = dates.map(d => getReturn(d - 7.days, d).get)
val mean = rets.average
val variance = rets.map(r => Math.pow(r - mean, 2)).average
val weeklyStdDev = Math.sqrt(variance);
return weeklyStdDev * Math.sqrt(40);
}



def mav200(): Double = {
return _adjDictionary.toList
.sortWith((elem1, elem2) => elem1._1 >= elem2._1)
.take(200).map(keyValue => keyValue._2).average
}
def todayPrice() : Double = {
getPrice(DateTime.now) match {
case None => 0.0
case Some((price,_)) => price
}
}
def getSummary() =
Summary(ticker, name, assetClass, assetSubClass,
lastWeekReturn, last4WeeksReturn, last3MonthsReturn,
last6MonthsReturn, lastYearReturn, stdDev, todayPrice,
mav200)
}

object Program extends Using {

def createUrl(ticker: String, start: DateTime, end: DateTime) : String = {
return """http://ichart.finance.yahoo.com/table.csv?s=""" + ticker +
"&a="+(start.month.get-1)+ "&b=" + start.day.get + "&c=" + start.year.get +
"&d="+(end.month.get -1)+ "&e=" + end.day.get + "&f=" + end.year.get +
"&g=d&ignore=.csv"
}



def main(args : Array[String]) : Unit = {


val tickers =
Source.fromFile(new File("ETFTest.csv")).getLines()
.drop(1)
.map(l => l.trim.split(','))
.filter(v => v(2) != "Leveraged")
.map(values => (values(0),values(1),values(2),if (values.length==4) values(3) else ""))
.toSeq.toArray

val len = tickers.length;

val start = DateTime.now - 2.years
val end = DateTime.now
val cevent = new CountDownLatch(len)
val summaries = new Array[Summary](len)

using(new WebClientConnections(connectionsPerAddress = 10, threadPool=10)) {
webClientConnections =>
for (i <- 0 until len) {
val t = tickers(i)
val url = createUrl(t._1, start, end)
val webClient = webClientConnections.getWebClient
webClient.downloadStringCompleted += downloadStringCompleted
webClient.downloadStringAsync(url, (t, cevent, summaries, i))
}

cevent.await

println

val top15perc =
summaries
.filter(s => s.LRS.isDefined)
.sortWith((elem1, elem2) => elem1.LRS >= elem2.LRS)
.take((len * 0.15).toInt)
val bottom15perc =
summaries
.filter(s => s.LRS.isDefined)
.sortWith((elem1, elem2) => elem1.LRS <= elem2.LRS)
.take((len * 0.15).toInt)

println
summaries(0).banner()
println("TOP 15%")
for (s <- top15perc) s.print()

println
println("Bottom 15%")
for (s <- bottom15perc) s.print()
}
}

def downloadStringCompleted(e : DownloadStringCompletedEventArgs) = {
val bigTuple = e.userState.asInstanceOf[
((String, String, String, String), CountDownLatch, Array[Summary], Int)]
val ((ticker, name, asset, subAsset), cevent, summaries, i) = bigTuple
if (e.error == null) {
val parse = DateTimeFormat.forPattern("yyyy-MM-dd").parseDateTime _
val adjustedPrices = e.result
.split('\n')
.drop(1)
.map(l => l.split(','))
.filter(l => l.length == 7)
.map(v => Event(parse(v(0)),v(6).toDouble))

val timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
summaries(i) = timeSeries.getSummary();
cevent.countDown()
printf("%s ", ticker)
} else {
printf("[%s ERROR] \n", ticker)
summaries(i) = Summary(ticker,name,"ERROR","ERROR",Some(0),Some(0),Some(0),Some(0),Some(0),0,0,0)
cevent.countDown()
}
}
}


Notes

TimeSeries getPrice method: Scala does not have output parameters on methods. It doesn't need them because the return type from a method can be a tuple and you can return as many values as you like. Also the method shown copies the C# style closely using loop variables. Another way of writing the same method in Scala making use of list functions is:
def getPrice(when : DateTime) : Option[(Double, Int)] = {
// Find the most recent day with a price starting from when, but don't go back past _start
val latestDayWithPrice =
Iterator.iterate(when)(_ - 1.days)
.dropWhile(d => !_adjDictionary.contains(d) && d >= _start)
.next
if (_adjDictionary.contains(latestDayWithPrice)) {
val shift = Days.daysBetween(when, latestDayWithPrice).getDays()
val aPrice = _adjDictionary(latestDayWithPrice)
Some((aPrice, shift))
} else {
None
}
}

TimeSeries getReturn method: The 2 calls to getPrice() return an Option[(price, days offset)].
Dealing with Option[...] in a for expression is an easy way of handling the possibility of either Option[...] being None. If either getPrice() call returns None, then the yield returns None as well. Another, perhaps simpler to understand, getReturn implementation is:
def getReturn(start: DateTime, end: DateTime) : Option[Double] = {
val endPriceDetails = getPrice(end)
if (endPriceDetails == None) return None
val (endPrice, daysBefore) = endPriceDetails.get
val startPriceDetails = getPrice(start + daysBefore.days)
if (startPriceDetails == None) return None
val (startPrice, _) = startPriceDetails.get
Some(endPrice / startPrice - 1.0)
}

TimeSeries mav200 method: The Scala version is slightly harder work than the .NET 3.5 LINQ OrderByDescending method with key selector syntax: .OrderByDescending(k => k.Key). The Scala version has to say *how* to do it; the LINQ version says *what* is required. The same is true for the C# use of the Average function, which takes a field selector.
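For comparison, a more what-oriented Scala variant is possible (a rough sketch; it assumes an implicit Ordering[DateTime] is in scope, which may need to be supplied separately):

def mav200b : Double =
  _adjDictionary.toList
    .sortBy(_._1)(Ordering[DateTime].reverse)
    .take(200)
    .map(_._2)
    .average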

I'm not sure whether the concurrent access to the summaries array in downloadStringCompleted is safe. It seems to work, but I don't know if it is genuinely thread safe. I've just copied the C# code, which may rely on built-in thread safe array access.


Some of the features of Scala that are shown here
  • Easy Java library interop. See use of CountDownLatch, Days, File.
  • Good old fashioned casting if you really need it. See asInstanceOf.
  • No semicolons
  • No need to use () for a function declaration with no parameters or a call to it. See TimeSeries.getSummary(). (brackets recommended if there are side effects)
  • Type declarations are unnecessary except in method parameters, but can be declared explicitly if it aids readability. See _adjDictionary.
  • Named and default parameters. See WebClientConnections
  • Much less boilerplate with "case" classes providing automatic constructors, fields, toString, equals, hashcode. See Event, Summary, TimeSeries
  • Joda time wrapper so you can say "today - 3.years"
  • Pattern matching assignment "val ((ticker, name, asset, subAsset), cevent, summaries, i) = bigTuple" in downloadStringCompleted
  • "using" block for automatic resource closing. In the main method, the "using(new WebClientConnections(" block will close down the WebClientConnections thread pool at the end of the block. This is very similar to the C# "using" code.
  • local "implicit" function definitions allowing you to effectively add methods to existing classes in a tightly controlled and scoped way. (see def iterableWithSumAndAverage)
  • Pattern matching, switch on steroids. See todayPrice().
  • Use of powerful list manipulation functions, such as Iterator.iterate, takeWhile to replace traditional state based loops. See iterate/dropWhile examples in stdDev() and in main(): drop, map, filter, sortWith, take. See the infamous foldLeft example at work in the sum function.

by Tim Azzopardi (noreply@blogger.com) at April 04, 2014 01:18 AM

April 02, 2014

Functional Jobs

Server Game Developer at Quark Games (Full-time)

Quark Games was established in 2008 with the mission to create hardcore games for the mobile and tablet platforms. By focusing on making high quality, innovative, and engaging games, we aim to redefine mobile and tablet gaming as it exists today.

We seek to gather a group of individuals who are ambitious but humble professionals who are relentless in their pursuit of learning and sharing knowledge. We're looking for people who share our passion for games, aren’t afraid to try new and different things, and inspire and push each other to personal and professional success.

As a Server Game Developer, you’ll be responsible for implementing server related game features. You’ll be working closely with the server team to create scalable infrastructure as well as the client team for feature integration. You’ll have to break out of your toolset to push boundaries on technology to deliver the most robust back end to our users.

What you’ll do every day

  • Develop and maintain features and systems necessary for the game
  • Collaborate with team members to create and manage scalable architecture
  • Work closely with Client developers
    on feature integration
  • Solve real time problems at a large
    scale
  • Evaluate new technologies and products

What you can bring to the role

  • Ability to get stuff done
  • Desire to learn new technologies and design patterns
  • Care about creating readable, reusable, well documented, and clean code
  • Passion for designing and building systems to scale
  • Excitement for building and playing games

Bonus points for

  • Experience with a functional language (Erlang, Elixir, Haskell, Scala, Julia, Rust, etc..)
  • Experience with a concurrent language (Erlang, Elixir, Clojure, Go, Scala, etc..)
  • Being a polyglot programmer and having experience with a wide range of languages (Ruby, C#, and Objective-C)
  • Experience with database integration and management for NoSQL systems (Riak, Couchbase, Redis, etc...)
  • Experience with server operations, deployment, and with tools such as Chef or Puppet
  • Experience with system administration

Get information on how to apply for this position.

April 02, 2014 11:36 PM

April 01, 2014

Francois Armand

Where all the activity went?

As you can see, the activity on this blog has been nonexistent for several years now. For the two visitors wondering, my main focus switched to my family (2 sons, another one on the way), Normation (my company doing devops, config management, etc.: http://www.normation.com/ ) and of course Rudder (http://www.rudder-project.org/).

I'm still doing a ton of Scala, and you can find some articles on our company blog (http://blog.normation.com/) or slides from presentations I gave, like the one on Scala + ZeroMQ for Scala.IO 2013. It's on Slideshare: http://fr.slideshare.net/normation/

And of course, there is my Github page: https://github.com/fanf/ or twitter: https://twitter.com/fanf42

Hope to see you on these other media!

by Fanf (noreply@blogger.com) at April 01, 2014 01:02 PM

Quoi qu'il en soit

Becoming really rich with Java 8

Disclaimer: the C#, Scala and Java 8 algorithms shown and referenced here implement a "momentum investing" algorithm. This is purely for computer language comparison purposes and should definitely not be taken as investment advice.

In 2009, I saw the post Becoming really rich with C# showcasing the new features in C# 4.5, and I was impressed by C#, with its hybrid object-functional approach and collection APIs that give collection operations a SQL-like feel:

var adjustedPrices =
    e.Result
    .Split(new[] { '\n' })
    .Skip(1)
    .Select(l => l.Split(new[] { ',' }))
    .Where(l => l.Length == 7)
    .Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));

Now let's do that in 7 lines of code in Java 5, 6, or 7. Er no, sorry.

At the time, I was learning Scala, so I translated Becoming really rich with C# into Scala and compared them side by side (see http://quoiquilensoit.blogspot.com/2009/10/becoming-really-rich-with-scala.html). The result surprised me: I thought C# held up pretty well overall.

So, a full four years later, Oracle owns Java and Java 8 is out, with some of the same features that C# was offering in .NET 4.5 in 2010. There is obvious stuff that Java 8 still does not have: LINQ, output parameters, vars, tuples, optional/nullable numerics. But I tried the same exercise, trying to keep to the spirit of the C# code.

The code is on github: https://github.com/azzoti/get-rich-with-java8

git clone https://github.com/azzoti/get-rich-with-java8.git

It's an Eclipse Maven project, but you can run it straight from the command line with:

mvn exec:java

(Make sure you have JDK 8 set up!)




Original C# | Java 8
See notes after the table

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using System.IO;

namespace ETFAnalyzer {






















struct Event {
internal Event(DateTime date, double price) { Date = date; Price = price; }
internal readonly DateTime Date;
internal readonly double Price;
}










class Summary {
internal Summary(string ticker, string name, string assetClass,
string assetSubClass, double? weekly, double? fourWeeks,
double? threeMonths, double? sixMonths, double? oneYear,
double? stdDev, double price, double? mav200) {
Ticker = ticker;
Name = name;
AssetClass = assetClass;
AssetSubClass = assetSubClass;
// Abracadabra ...
LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4;
Weekly = weekly;
FourWeeks = fourWeeks;
ThreeMonths = threeMonths;
SixMonths = sixMonths;
OneYear = oneYear;
StdDev = stdDev;
Mav200 = mav200;
Price = price;
}
internal readonly string Ticker;
internal readonly string Name;
internal readonly string AssetClass;
internal readonly string AssetSubClass;
internal readonly double? LRS;
internal readonly double? Weekly;
internal readonly double? FourWeeks;
internal readonly double? ThreeMonths;
internal readonly double? SixMonths;
internal readonly double? OneYear;
internal readonly double? StdDev;
internal readonly double? Mav200;
internal double Price;

internal static void Banner() {
Console.Write("{0,-6}", "Ticker");
Console.Write("{0,-50}", "Name");
Console.Write("{0,-12}", "Asset Class");
Console.Write("{0,4}", "RS");
Console.Write("{0,4}", "1Wk");
Console.Write("{0,4}", "4Wk");
Console.Write("{0,4}", "3Ms");
Console.Write("{0,4}", "6Ms");
Console.Write("{0,4}", "1Yr");
Console.Write("{0,6}", "Vol");
Console.WriteLine("{0,2}", "Mv");
}

internal void Print() {

Console.Write("{0,-6}", Ticker);
Console.Write("{0,-50}", new String(Name.Take(48).ToArray()));
Console.Write("{0,-12}", new String(AssetClass.Take(10).ToArray()));
Console.Write("{0,4:N0}", LRS * 100);
Console.Write("{0,4:N0}", Weekly * 100);
Console.Write("{0,4:N0}", FourWeeks * 100);
Console.Write("{0,4:N0}", ThreeMonths * 100);
Console.Write("{0,4:N0}", SixMonths * 100);
Console.Write("{0,4:N0}", OneYear * 100);
Console.Write("{0,6:N0}", StdDev * 100);
if (Price <= Mav200)
Console.WriteLine("{0,2}", "X");
else
Console.WriteLine();
}
}

class TimeSeries {
internal readonly string Ticker;
readonly DateTime _start;
readonly Dictionary<DateTime, double> _adjDictionary;
readonly string _name;
readonly string _assetClass;
readonly string _assetSubClass;

internal TimeSeries(string ticker, string name, string assetClass, string assetSubClass, IEnumerable<Event> events) {
Ticker = ticker;
_name = name;
_assetClass = assetClass;
_assetSubClass = assetSubClass;
_start = events.Last().Date;
_adjDictionary = events.ToDictionary(e => e.Date, e => e.Price);
}

bool GetPrice(DateTime when, out double price, out double shift) {
// To nullify the effect of hours/min/sec/millisec being different from 0
when = new DateTime(when.Year, when.Month, when.Day);
var found = false;
shift = 1;
double aPrice = 0;
while (when >= _start && !found) {
if (_adjDictionary.TryGetValue(when, out aPrice)) {
found = true;
}
when = when.AddDays(-1);
shift -= 1;
}
price = aPrice;
return found;
}

double? GetReturn(DateTime start, DateTime end) {
var startPrice = 0.0;
var endPrice = 0.0;
var shift = 0.0;
var foundEnd = GetPrice(end, out endPrice, out shift);
var foundStart = GetPrice(start.AddDays(shift), out startPrice, out shift);
if (!foundStart || !foundEnd)
return null;
else
return endPrice / startPrice - 1;
}

internal double? LastWeekReturn() {
return GetReturn(DateTime.Now.AddDays(-7), DateTime.Now);
}
internal double? Last4WeeksReturn() {
return GetReturn(DateTime.Now.AddDays(-28), DateTime.Now);
}
internal double? Last3MonthsReturn() {
return GetReturn(DateTime.Now.AddMonths(-3), DateTime.Now);
}
internal double? Last6MonthsReturn() {
return GetReturn(DateTime.Now.AddMonths(-6), DateTime.Now);
}
internal double? LastYearReturn() {
return GetReturn(DateTime.Now.AddYears(-1), DateTime.Now);
}

internal double? StdDev() {
var now = DateTime.Now;
now = new DateTime(now.Year, now.Month, now.Day);
var limit = now.AddYears(-3);
var rets = new List<double>();
while (now >= _start.AddDays(12) && now >= limit) {
var ret = GetReturn(now.AddDays(-7), now);
rets.Add(ret.Value);
now = now.AddDays(-7);
}
var mean = rets.Average();
var variance = rets.Select(r => Math.Pow(r - mean, 2)).Sum();
var weeklyStdDev = Math.Sqrt(variance / rets.Count);
return weeklyStdDev * Math.Sqrt(40);
}
internal double? MAV200() {
return _adjDictionary.ToList()
.OrderByDescending(k => k.Key)
.Take(200).Average(k => k.Value);
}
internal double TodayPrice() {
var price = 0.0;
var shift = 0.0;
GetPrice(DateTime.Now, out price, out shift);
return price;
}
internal Summary GetSummary() {
return new Summary(Ticker, _name, _assetClass, _assetSubClass,
LastWeekReturn(), Last4WeeksReturn(), Last3MonthsReturn(),
Last6MonthsReturn(), LastYearReturn(), StdDev(), TodayPrice(),
MAV200());
}
}

class Program {

static string CreateUrl(string ticker, DateTime start, DateTime end)
{
return @"http://ichart.finance.yahoo.com/table.csv?s=" + ticker +
"&a="+(start.Month - 1).ToString()+"&b="+start.Day.ToString()+"&c="+start.Year.ToString() +
"&d="+(end.Month - 1).ToString()+"&e="+end.Day.ToString()+"&f="+end.Year.ToString() +
"&g=d&ignore=.csv";
}

static void Main(string[] args) {
// If you raise this above 5 you tend to get frequent connection closing on my machine
// I'm not sure if it is the msft network or the yahoo web service
ServicePointManager.DefaultConnectionLimit = 10;

var tickers =
File.ReadAllLines("ETFTest.csv")
.Skip(1)
.Select(l => l.Split(new[] { ',' }))
.Where(v => v[2] != "Leveraged")
.Select(values => Tuple.Create(values[0], values[1], values[2], values[3]))
.ToArray();

var len = tickers.Length;

var start = DateTime.Now.AddYears(-2);
var end = DateTime.Now;
var cevent = new CountdownEvent(len);
var summaries = new Summary[len];

for(var i = 0; i < len; i++) {
var t = tickers[i];
var url = CreateUrl(t.Item1, start, end);
using (var webClient = new WebClient()) {
webClient.DownloadStringCompleted +=
new DownloadStringCompletedEventHandler(downloadStringCompleted);
webClient.DownloadStringAsync(new Uri(url), Tuple.Create(t, cevent, summaries, i));
}
}

cevent.Wait();
Console.WriteLine("\n");

var top15perc =
summaries
.Where(s => s.LRS.HasValue)
.OrderByDescending(s => s.LRS)
.Take((int)(len * 0.15));
var bottom15perc =
summaries
.Where(s => s.LRS.HasValue)
.OrderBy(s => s.LRS)
.Take((int)(len * 0.15));

Console.WriteLine();
Summary.Banner();
Console.WriteLine("TOP 15%");
foreach(var s in top15perc)
s.Print();

Console.WriteLine();
Console.WriteLine("Bottom 15%");
foreach (var s in bottom15perc)
s.Print();

}

static void downloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) {
var bigTuple = (Tuple<Tuple<string, string, string, string>, CountdownEvent, Summary[], int>)e.UserState;
var tuple = bigTuple.Item1;
var cevent = bigTuple.Item2;
var summaries = bigTuple.Item3;
var i = bigTuple.Item4;
var ticker = tuple.Item1;
var name = tuple.Item2;
var asset = tuple.Item3;
var subAsset = tuple.Item4;

if (e.Error == null) {
var adjustedPrices =
e.Result
.Split(new[] { '\n' })
.Skip(1)
.Select(l => l.Split(new[] { ',' }))
.Where(l => l.Length == 7)
.Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));

var timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
summaries[i] = timeSeries.GetSummary();
cevent.Signal();
Console.Write("{0} ", ticker);
} else {
Console.WriteLine("[{0} ERROR] ", ticker);
summaries[i] = new Summary(ticker,name,"ERROR","ERROR",0,0,0,0,0,0,0,0);
cevent.Signal();
}
}
}
}

package etf.analyzer;

import static java.lang.System.out;
import static java.util.Comparator.comparing;
import static java.util.stream.Collectors.*;

import java.io.IOException;
import java.nio.file.*;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.Map.Entry;
import java.util.concurrent.CountDownLatch;
import java.util.stream.Stream;

class Event {
public Event(LocalDate date, double price) {
this.date = date;
this.price = price;
}
public LocalDate getDate() {
return date;
}
public double getPrice() {
return price;
}
private LocalDate date;
private double price;
}
class Summary {
public Summary(String ticker, String name, String assetClass,
String assetSubClass, OptionalDouble weekly, OptionalDouble fourWeeks,
OptionalDouble threeMonths, OptionalDouble sixMonths, OptionalDouble oneYear,
OptionalDouble stdDev, double price, OptionalDouble mav200) {
this.ticker = ticker;
this.name = name;
this.assetClass = assetClass;
// this.assetSubClass = assetSubClass;
// Abracadabra ...
this.lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d));
this.weekly = weekly;
this.fourWeeks = fourWeeks;
this.threeMonths = threeMonths;
this.sixMonths = sixMonths;
this.oneYear = oneYear;
this.stdDev = stdDev;
this.mav200 = mav200;
this.price = price;
}
private String ticker;
private String name;
private String assetClass;
// private String assetSubClass;
public OptionalDouble lrs;
private OptionalDouble weekly;
private OptionalDouble fourWeeks;
private OptionalDouble threeMonths;
private OptionalDouble sixMonths;
private OptionalDouble oneYear;
private OptionalDouble stdDev;
private OptionalDouble mav200;
private double price;

static void banner() {
out.printf("%-6s", "Ticker");
out.printf("%-50s", "Name");
out.printf("%-12s", "Asset Class");
out.printf("%4s", "RS");
out.printf("%4s", "1Wk");
out.printf("%4s", "4Wk");
out.printf("%4s", "3Ms");
out.printf("%4s", "6Ms");
out.printf("%4s", "1Yr");
out.printf("%6s", "Vol");
out.printf("%2s\n", "Mv");
}
void print() {
out.printf("%-6s", ticker);
out.printf("%-50s", name);
out.printf("%-12s", assetClass);
out.printf("%4.0f", lrs.orElse(0.0d) * 100);
out.printf("%4.0f", weekly.orElse(0.0d) * 100);
out.printf("%4.0f", fourWeeks.orElse(0.0d) * 100);
out.printf("%4.0f", threeMonths.orElse(0.0d) * 100);
out.printf("%4.0f", sixMonths.orElse(0.0d) * 100);
out.printf("%4.0f", oneYear.orElse(0.0d) * 100);
out.printf("%6.0f", stdDev.orElse(0.0d) * 100);
if (price <= mav200.orElse(-Double.MAX_VALUE))
out.printf("%2s\n", "X");
else
out.println();
}
}

class TimeSeries {
private String ticker;
private LocalDate _start;
private Map<LocalDate, Double> _adjDictionary;
private String _name;
private String _assetClass;
private String _assetSubClass;

public TimeSeries(String ticker, String name, String assetClass, String assetSubClass, List<Event> events) {
this.ticker = ticker;
this._name = name;
this._assetClass = assetClass;
this._assetSubClass = assetSubClass;
this._adjDictionary = events.stream().collect(toMap(Event::getDate, Event::getPrice));
this._start = events.size() - 1 > 0 ? events.get(events.size() - 1).getDate() : LocalDate.now().minusYears(99);
}

private static final class FindPriceAndShift {
public FindPriceAndShift(boolean found, double aPrice, int shift) {
this.found = found;
this.price = aPrice;
this.shift = shift;
}
private boolean found;
private double price;
private int shift;
}

private FindPriceAndShift getPrice(LocalDate when) {
boolean found = false;
int shift = 1;
double aPrice = 0.0d;
while ((when.equals(_start) || when.isAfter(_start)) && !found) {
if (found = _adjDictionary.containsKey(when)) {
aPrice = _adjDictionary.get(when);
}
when = when.minusDays(1);
shift -= 1;
}
return new FindPriceAndShift(found, aPrice, shift);
}

OptionalDouble getReturn(LocalDate start, LocalDate endDate) {
FindPriceAndShift foundEnd = getPrice(endDate);
FindPriceAndShift foundStart = getPrice(start.plusDays(foundEnd.shift));
if (!foundStart.found || !foundEnd.found)
return OptionalDouble.empty();
else {
return OptionalDouble.of(foundEnd.price / foundStart.price - 1.0d);
}
}

private OptionalDouble lastWeekReturn() {
return getReturn(LocalDate.now().minusDays(7), LocalDate.now());
}
private OptionalDouble last4WeeksReturn() {
return getReturn(LocalDate.now().minusDays(28), LocalDate.now());
}
private OptionalDouble last3MonthsReturn() {
return getReturn(LocalDate.now().minusMonths(3), LocalDate.now());
}
private OptionalDouble last6MonthsReturn() {
return getReturn(LocalDate.now().minusMonths(6), LocalDate.now());
}
private OptionalDouble lastYearReturn() {
return getReturn(LocalDate.now().minusYears(1), LocalDate.now());
}
private Double sum(Collection<Double> d) {
return d.parallelStream().reduce(0d, Double::sum);
}
private Double avg(Collection<Double> d) {
return sum(d) / d.size();
}
private OptionalDouble stdDev() {
LocalDate now = LocalDate.now();
LocalDate limit = now.minusYears(3);
List<Double> rets = new ArrayList<>();
while (now.compareTo(_start.plusDays(12)) >= 0 && now.compareTo(limit) >= 0) {
OptionalDouble ret = getReturn(now.minusDays(7), now);
rets.add(ret.orElse(0d));
now = now.minusDays(7);
}
Double mean = avg(rets);
Double variance = avg(rets.parallelStream().map(r -> Math.pow(r - mean, 2)).collect(toList()));
Double weeklyStdDev = Math.sqrt(variance);
return OptionalDouble.of(weeklyStdDev * Math.sqrt(40));
}
private OptionalDouble MAV200() {
return OptionalDouble.of(
_adjDictionary.entrySet().parallelStream()
.sorted(comparing((Entry<LocalDate,Double> p) -> p.getKey()).reversed())
.limit(200).mapToDouble(e -> e.getValue()).average().orElse(0d)
);
}
private double todayPrice() {
return getPrice(LocalDate.now()).price;
}
public Summary getSummary() {
return new Summary(ticker, _name, _assetClass, _assetSubClass,
lastWeekReturn(), last4WeeksReturn(), last3MonthsReturn(),
last6MonthsReturn(), lastYearReturn(), stdDev(), todayPrice(),
MAV200());
}
}

public class Program {

static String createUrl(String ticker, LocalDate start, LocalDate end) {
return "http://ichart.finance.yahoo.com/table.csv?s=" + ticker + "&a="
+ (start.getMonthValue() - 1) + "&b=" + start.getDayOfMonth()
+ "&c=" + start.getYear() + "&d=" + (end.getMonthValue() - 1)
+ "&e=" + end.getDayOfMonth() + "&f=" + end.getYear()
+ "&g=d&ignore=.csv";
}

public static void main(String[] args) throws IOException, InterruptedException {

List<String[]> tickers = Files.lines(FileSystems.getDefault().getPath("ETFs.csv"))
.skip(1)
.parallel()
.map(line -> line.split(",", 4))
.filter(v -> !v[2].equals("Leveraged"))
.collect(toList());

int len = tickers.size();

LocalDate start = LocalDate.now().minusYears(2);
LocalDate end = LocalDate.now();
CountDownLatch cevent = new CountDownLatch(len);
Summary[] summaries = new Summary[len];

try (WebClient webClient = new WebClient()) {
for (int i = 0; i < len; i++) {
String[] t = tickers.get(i);
final int index = i;
webClient.downloadStringAsync(createUrl(t[0], start, end), result -> {
summaries[index] = downloadStringCompleted(t[0], t[1], t[2], t[3], result);
cevent.countDown();
});
}
cevent.await();
}

Stream<Summary> top15perc =
Arrays.stream(summaries)
.filter(s -> s.lrs.isPresent())
.sorted(comparing((Summary p) -> p.lrs.get()).reversed())
.limit((int)(len * 0.15));
Stream<Summary> bottom15perc =
Arrays.stream(summaries)
.filter(s -> s.lrs.isPresent())
.sorted(comparing((Summary p) -> p.lrs.get()))
.limit((int)(len * 0.15));

System.out.println();
Summary.banner();
System.out.println("TOP 15%");
top15perc.forEach(
s -> s.print());

System.out.println();
Summary.banner();
System.out.println("BOTTOM 15%");
bottom15perc.forEach(
s -> s.print());

}

public static Summary downloadStringCompleted(String ticker, String name, String asset, String subAsset,
DownloadStringAsyncCompletedArgs e
) {
Summary summary;
if (e.getError() == null) {
List<Event> adjustedPrices =
Arrays.stream(e.getResult().split("\n"))
.skip(1)
.parallel()
.map(line -> line.split(",", 7))
.filter(l -> l.length == 7)
.map(v -> new Event(LocalDate.parse(v[0], DateTimeFormatter.ISO_LOCAL_DATE), Double.valueOf(v[6]))).collect(toList());
TimeSeries timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
summary = timeSeries.getSummary();
} else {
System.err.printf("[%s ERROR]", ticker);
final OptionalDouble zero = OptionalDouble.of(0d);
summary = new Summary(ticker, name, "ERROR", "ERROR", zero, zero, zero, zero, zero, zero, 0d, zero);
}
return summary;
}
}

Some observations:
  • The code depends on Yahoo for historical stock prices, and sometimes Yahoo is unavailable. Wait five minutes and run the program again.
  • The Java code is much faster than the C# code, although going to Yahoo for historical stock prices is the limiting factor. I don't think the C# should be slower than the Java code, but it is, and I'm not sure why. I'm pretty sure the poor C# performance is down to the .NET WebClient configuration, but I might be wrong.
  • In Java 8, just to show how easy it is, I've used parallelStream() and .parallel() in a couple of places, but these can be removed without changing the results. I can see no noticeable difference in performance with or without these calls on an 8-core machine. As I said above, I believe the limiting factor is going to Yahoo for the historical stock prices: there is not that much number crunching to do, and I suspect the time it takes pales into insignificance next to the internet fetch time, so doing the calculations in parallel just isn't worth it here. But it's good to see how easy it is to parallelize work if you want to. Being able to simply say Collection.parallelStream() or Stream.parallel() is incredible if you find a sensible use case for it.
  • The Java 8 code is a little longer than the C# code. In Java 7, I'm guessing the code would be at least twice as long and very, very ugly if written in a similar style. The Java 8 code is not as concise as C# or Scala, but at least it's in the same ballpark. Partly this is due to Java POJO boilerplate (e.g. the FindPriceAndShift class and the Event class getters), but that's no big deal (IMO). The Java code is also more verbose because types must be declared, unlike in C# where you can write "var" instead of a type declaration and the C# compiler usually infers what you mean.
  • Tuples. C# has tuples and Scala has tuples, but apparently their use is the spawn of Satan and civilization will collapse if they are used in Java, even to hold temporary results when parsing comma-separated values into another class. (Oracle will be removing HashMap from Java 9, apparently for similar reasons ;)) So as not to be arrested by the Java thought police, I avoided succumbing: the C# code uses tuples, but I've managed to avoid them.
  • Output parameters. In my 2009 Scala translation, I used a returned tuple instead of the C# output parameters (which I personally found confusing in the C# algorithm). In the Java 8 version I used a POJO, FindPriceAndShift, rather than sell my soul to the wicked tuple monster.
  • The C# code uses the "double?" type, a double that can have an empty value, which means you can write LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4 and any of fourWeeks, threeMonths, sixMonths and oneYear can be empty without causing a null pointer exception. Java 8 does ship with an OptionalDouble but, strangely, you can't say a.add(b).add(c).divide(d). So I wrote my own OptionalDouble class which does allow this, so you can say lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d)). If you look at the code you can see it's almost trivially simple. It's not very pretty compared to the C# or Scala equivalent, but a lot of Java people are used to this style of method chaining with BigDecimal, and with OptionalDouble it is now null/empty-value safe. (The same thing can easily be done to create an OptionalBigDecimal class, obviously, and this OptionalDouble stuff could easily have been done in Java 7 too.) For comparison, a Scala sketch of the equivalent Option[Double] arithmetic appears after this list.
  • Java does not have a C#-style WebClient, so I have taken the open source Jetty HTTP client and wrapped it in a simple wrapper to make it look like the C# WebClient. See GitHub for the WebClient class.
  • Java lives on open source. If the C# code is slow because the .NET WebClient is doing something stupid, it's hard to find out, as it's closed source. If Jetty's Java HTTP client is broken, you can debug the source or switch to Apache's HTTP client: the best open source libraries emerge through natural selection. [Update: reaction from Reddit (I love Reddit!): "Sorry, that is pure bullshit. It is perfectly feasible to debug .Net Framework source code: http://msdn.microsoft.com/en-us/library/cc667410.aspx And no, it doesn't have a bug. They've been working on that for generations, and Microsoft puts serious money and has serious people working on stuff, as opposed to a bunch of unknown random hippie weed smokers financed by random coin slot donations. And even if java was faster it doesn't change the fact that it is a useless dinosaur which gets improvements 10 years after the rest of the mainstream languages. All that crappy bloated unmaintainable event-based async code can be converted to a beautiful sequence of async / await in C# 5.0, whereas you will probably not see anything like that in java in the next 20 years due to its complete lack of evolution and retardedness."]
  • There is some surprising functionality missing from the Stream and Collections APIs: there is no zip, takeWhile or dropWhile for sequential streams. I'm guessing Java 9, Guava and others will fill this gap pretty fast. (A quick Scala comparison follows this list.)
  • When I showed the code below to an experienced colleague who has only used Java <= 6, he said "that looks like C++ to me: that's completely unmaintainable". Sigh.
    Stream<Summary> top15perc =
        Arrays.stream(summaries)
            .filter(s -> s.lrs.isPresent())
            .sorted(comparing((Summary p) -> p.lrs.get()).reversed())
            .limit((int)(len * 0.15));
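
For comparison with the OptionalDouble chaining above, here is a minimal sketch of the Scala equivalent alluded to in that bullet, using the standard Option[Double]: any empty operand makes the whole result empty. The function and parameter names here are mine, for illustration only.

    // Empty-safe arithmetic with Option[Double]: if any operand is None,
    // the for-comprehension short-circuits and the result is None.
    def lrs(fourWeeks: Option[Double], threeMonths: Option[Double],
            sixMonths: Option[Double], oneYear: Option[Double]): Option[Double] =
      for {
        fw <- fourWeeks
        tm <- threeMonths
        sm <- sixMonths
        oy <- oneYear
      } yield (fw + tm + sm + oy) / 4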

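And for the bullet on missing zip/takeWhile/dropWhile: Scala's standard collections already provide all three, as this quick illustration (mine, not from the original post) shows:

    // zip, takeWhile and dropWhile on plain Scala Lists
    List(1, 2, 3).zip(List("a", "b"))    // List((1,"a"), (2,"b"))
    List(1, 2, 3, 4).takeWhile(_ < 3)    // List(1, 2)
    List(1, 2, 3, 4).dropWhile(_ < 3)    // List(3, 4)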


          by Tim Azzopardi (noreply@blogger.com) at April 01, 2014 09:25 AM

          March 31, 2014

          Ruminations of a Programmer

          Functional Patterns in Domain Modeling - The Specification Pattern

          When you model a domain, you model its entities and behaviors. As Eric Evans mentions in his book Domain Driven Design, the focus is on the domain itself. The model that you design and implement must speak the ubiquitous language, so that the essence of the domain is not lost in the myriad incidental complexities that your implementation enforces. And while being expressive, the model needs to be extensible too. When we talk about extensibility, one related attribute is compositionality.

          Functions compose more naturally than objects, and in this post I will use functional programming idioms to implement one of the patterns that form the core of domain driven design - the Specification pattern, whose most common use case is to implement domain validation. Eric's book on DDD says regarding the Specification pattern ..
          It has multiple uses, but one that conveys the most basic concept is that a SPECIFICATION can test any object to see if it satisfies the specified criteria.
          A specification is defined as a predicate, whereby business rules can be combined by chaining them together using boolean logic (see the sketch just below). So there's a concept of composition, and we can talk about a Composite Specification when we talk about this pattern. Various DDD literature implements this using the Composite design pattern, commonly realized with class hierarchies and object composition. In this post we will use function composition instead.
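
          To make that concrete before moving to the functional encoding, here is a minimal sketch of the specification-as-predicate idea. This is my illustration; the names Spec, and and or are invented for the sketch and taken neither from this post nor from Evans' book.

          // a specification is just a predicate on A
          type Spec[A] = A => Boolean

          // boolean chaining gives composite specifications
          def and[A](s1: Spec[A], s2: Spec[A]): Spec[A] = a => s1(a) && s2(a)
          def or[A](s1: Spec[A], s2: Spec[A]): Spec[A] = a => s1(a) || s2(a)

          // two toy rules combined into one composite specification
          val positive: Spec[Int] = (n: Int) => n > 0
          val even: Spec[Int] = (n: Int) => n % 2 == 0
          val positiveAndEven: Spec[Int] = and(positive, even)  // positiveAndEven(4) == true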

          Specification - Where?

          One of the very common confusions that we have when we design a model is where to keep the validation code of an aggregate root, or of any entity for that matter.
          • Should we have the validation as part of the entity? No - it makes the entity bloated. Also, validations may vary based on context, while the core of the entity remains the same.
          • Should we have validations as part of the interface? Maybe we consume JSON and build entities out of it. Indeed, some validations can belong to the interface; don't hesitate to put them there.
          • But the most interesting validations are those that belong to the domain layer. They are business validations (or specifications), which Eric Evans defines as something that "states a constraint on the state of another object". They are business rules which the entity needs to honor in order to proceed to the next stage of processing.
          Consider the following simple example. We take an Order entity, and the model identifies the following domain "specifications" that a new Order must satisfy before being thrown into the processing pipeline:

          1. it must be a valid order obeying the constraints that the domain requires, e.g. a valid date, a valid number of line items, etc.
          2. it must be approved by a valid approving authority - only then does it proceed to the next stage of the pipeline
          3. a customer status check must pass, to ensure that the customer is not blacklisted
          4. the line items from the order must be checked against inventory to see if the order can be fulfilled
          These are the separate steps that are done in sequence by the order processing pipeline, as pre-order checks, before the actual order is ready for fulfilment. A failure in any of them takes the order out of the pipeline and the process stops there. So the model that we design needs to honor the sequence as well as check all the constraints that are part of every step.

          An important point to note here is that none of the above steps mutate the order - so every specification gets a copy of the original Order object as input, on which it checks some domain rules and determines if it's suitable to be passed to the next step of the pipeline.

          Jumping on to the implementation ..

          Let's take down some implementation notes from what we learnt above ..

          • The Order can be an immutable entity at least for this sequence of operations
          • Every specification needs an Order; can we pull some trick out of our hat that avoids cluttering the API with an Order instance passed to every specification in the sequence?
          • Since we plan to use functional programming principles, how can we model the above sequence as an expression, so that our final result still remains composable with the next process of order fulfilment (which we will discuss in a future post)?
          • All these functions appear to have similar signatures - we need to make them compose with each other
          Before I present any more explanation or theory, here are the basic building blocks which implement the notes that we took after talking to the domain experts ..

          type ValidationStatus[S] = \/[String, S]

          type ReaderTStatus[A, S] = ReaderT[ValidationStatus, A, S]

          object ReaderTStatus extends KleisliInstances with KleisliFunctions {
            def apply[A, S](f: A => ValidationStatus[S]): ReaderTStatus[A, S] = kleisli(f)
          }

          ValidationStatus defines the type that we will return from each of the functions. It's either some status S or an error string that explains what went wrong. It's actually an Either type (right biased) as implemented in scalaz.

          One of the things we thought would be cool is to avoid repeating the Order parameter in every method when we invoke the sequence. One of the idiomatic ways of doing that is to use the Reader monad. But here we already have a monad - \/ is a monad - so we need to stack the two together using a monad transformer. ReaderT does this job, and ReaderTStatus defines the type that makes our life easier by combining the two.

          The next step is an implementation of ReaderTStatus, which we do in terms of another abstraction called Kleisli. We will use the scalaz library for this, which implements ReaderT in terms of Kleisli. I will not go into the details of this implementation - in case you are curious, refer to this excellent piece by Eugene.

          So, what does a sample specification look like?

          Before going into that, here are some basic abstractions, grossly simplified only for illustration purposes ..

          // the base abstraction
          sealed trait Item {
            def itemCode: String
          }

          // sample implementations
          case class ItemA(itemCode: String, desc: Option[String],
            minPurchaseUnit: Int) extends Item
          case class ItemB(itemCode: String, desc: Option[String],
            nutritionInfo: String) extends Item

          case class LineItem(item: Item, quantity: Int)

          case class Customer(custId: String, name: String, category: Int)

          // a skeleton order
          case class Order(orderNo: String, orderDate: Date, customer: Customer,
            lineItems: List[LineItem])

          And here's a specification that checks some of the constraints on the Order object ..

          // a basic validation
          private def validate = ReaderTStatus[Order, Boolean] { order =>
            if (order.lineItems isEmpty) left(s"Validation failed for order $order")
            else right(true)
          }

          It's just for illustration and does not contain many domain rules. The important part is how we use the above defined types to implement the function. Order is not an explicit argument to the function - it's curried. The function returns a ReaderTStatus, which is itself a monad and hence allows us to sequence it in the pipeline with other specifications. So we get the requirement of sequencing without breaking out of the expression-oriented programming style.

          Here are a few other specifications based on the domain knowledge that we have gathered ..

          private def approve = ReaderTStatus[Order, Boolean] { order =>
            right(true)
          }

          private def checkCustomerStatus(customer: Customer) = ReaderTStatus[Order, Boolean] { order =>
            right(true)
          }

          private def checkInventory = ReaderTStatus[Order, Boolean] { order =>
            right(true)
          }

          Wiring them together

          But how do we wire these pieces together so that we have the sequence of operations that the domain mandates, yet keep all the goodness of compositionality in our model? It's actually quite easy, since we have already done the hard work of defining the appropriate types that compose ..

          Here's the isReadyForFulfilment method that defines the composite specification and invokes all the individual specifications in sequence using a for-comprehension, which, as you all know, does the monadic bind in Scala and gives us the final expression that needs to be evaluated for the Order supplied.

          def isReadyForFulfilment(order: Order) = {
            val s = for {
              _ <- validate
              _ <- approve
              _ <- checkCustomerStatus(order.customer)
              c <- checkInventory
            } yield c
            s(order)
          }

          So we have the monadic bind implementing the sequencing without breaking the compositionality of the abstractions; a short usage sketch follows. In the next instalment we will see how this can be composed with the downstream processing of the order, which will not only read stuff from the entity but mutate it too - of course, in a functional way.
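
          To make the wiring concrete, here is a hypothetical invocation of the composite specification. This is my sketch, reusing the simplified case classes above and assuming the scalaz \/ constructors are in scope; the sample values are invented.

          import java.util.Date

          val order = Order("o-1", new Date, Customer("c-42", "jane", 1),
            List(LineItem(ItemA("a-1", None, minPurchaseUnit = 10), quantity = 100)))

          isReadyForFulfilment(order) match {
            case \/-(ok)  => println(s"order passed pre-checks: $ok")  // continue down the pipeline
            case -\/(err) => println(s"order rejected: $err")          // the process stops here
          }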

          by Debasish Ghosh (noreply@blogger.com) at March 31, 2014 06:27 AM

          Quoi qu'il en soit

          Becoming really rich with Java 8

          Disclaimer: the C#, Scala and Java 8 algorithms shown and referenced here implement a "momentum investing" algorithm. This is purely for computer language comparison purposes and should definitely not be taken as investment advice.

          In 2009, I saw this post Becoming really rich with C# showcasing the new features in C#  4.5 and was impressed with C# with its hybrid Object-Functional approach and collection APIs to give collection operations a SQL-like feel:

          var adjustedPrices =
              e.Result
              .Split(new[] { '\n' })
              .Skip(1)
              .Select(l => l.Split(new[] { ',' }))
              .Where(l => l.Length == 7)
              .Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));

          Now lets do that in 7 lines of code in Java 5, 6, or 7. Er no, sorry.

          At the time, I was learning Scala. So I translated Becoming really rich with C# into Scala and compared them side by side. Result:  See http://quoiquilensoit.blogspot.com/2009/10/becoming-really-rich-with-scala.html The result surprised me. I thought C# held up pretty well overall.

          So, a full four years later, Oracle owns Java and Java8 is out with some of the same features that C# was offering in dot net 4.5 in 2010. There is obvious missing stuff that Java 8 still does not have LINQ,  Output parameters. Vars. Tuples. Optional/Nullable numerics. But I tried the same exercise, trying to keep in the spirit of the C# code.

          The code is on github: https://github.com/azzoti/get-rich-with-java8

          git clone https://github.com/azzoti/get-rich-with-java8.git

          Its an eclipse maven project, but you can run straight from the command line with:

          mvn exec:java





          Original C#Java 8
          See notes after the table

          using System;
          using System.Collections.Generic;
          using System.Linq;
          using System.Text;
          using System.Net;
          using System.Threading;
          using System.Threading.Tasks;
          using System.IO;

          namespace ETFAnalyzer {






















          struct Event {
          internal Event(DateTime date, double price) { Date = date; Price = price; }
          internal readonly DateTime Date;
          internal readonly double Price;
          }










          class Summary {
          internal Summary(string ticker, string name, string assetClass,
          string assetSubClass, double? weekly, double? fourWeeks,
          double? threeMonths, double? sixMonths, double? oneYear,
          double? stdDev, double price, double? mav200) {
          Ticker = ticker;
          Name = name;
          AssetClass = assetClass;
          AssetSubClass = assetSubClass;
          // Abracadabra ...
          LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4;
          Weekly = weekly;
          FourWeeks = fourWeeks;
          ThreeMonths = threeMonths;
          SixMonths = sixMonths;
          OneYear = oneYear;
          StdDev = stdDev;
          Mav200 = mav200;
          Price = price;
          }
          internal readonly string Ticker;
          internal readonly string Name;
          internal readonly string AssetClass;
          internal readonly string AssetSubClass;
          internal readonly double? LRS;
          internal readonly double? Weekly;
          internal readonly double? FourWeeks;
          internal readonly double? ThreeMonths;
          internal readonly double? SixMonths;
          internal readonly double? OneYear;
          internal readonly double? StdDev;
          internal readonly double? Mav200;
          internal double Price;

          internal static void Banner() {
          Console.Write("{0,-6}", "Ticker");
          Console.Write("{0,-50}", "Name");
          Console.Write("{0,-12}", "Asset Class");
          Console.Write("{0,4}", "RS");
          Console.Write("{0,4}", "1Wk");
          Console.Write("{0,4}", "4Wk");
          Console.Write("{0,4}", "3Ms");
          Console.Write("{0,4}", "6Ms");
          Console.Write("{0,4}", "1Yr");
          Console.Write("{0,6}", "Vol");
          Console.WriteLine("{0,2}", "Mv");
          }

          internal void Print() {

          Console.Write("{0,-6}", Ticker);
          Console.Write("{0,-50}", new String(Name.Take(48).ToArray()));
          Console.Write("{0,-12}", new String(AssetClass.Take(10).ToArray()));
          Console.Write("{0,4:N0}", LRS * 100);
          Console.Write("{0,4:N0}", Weekly * 100);
          Console.Write("{0,4:N0}", FourWeeks * 100);
          Console.Write("{0,4:N0}", ThreeMonths * 100);
          Console.Write("{0,4:N0}", SixMonths * 100);
          Console.Write("{0,4:N0}", OneYear * 100);
          Console.Write("{0,6:N0}", StdDev * 100);
          if (Price <= Mav200)
          Console.WriteLine("{0,2}", "X");
          else
          Console.WriteLine();
          }
          }

          class TimeSeries {
          internal readonly string Ticker;
          readonly DateTime _start;
          readonly Dictionary<DateTime, double> _adjDictionary;
          readonly string _name;
          readonly string _assetClass;
          readonly string _assetSubClass;

          internal TimeSeries(string ticker, string name, string assetClass, string assetSubClass, IEnumerable<event> events) {
          Ticker = ticker;
          _name = name;
          _assetClass = assetClass;
          _assetSubClass = assetSubClass;
          _start = events.Last().Date;
          _adjDictionary = events.ToDictionary(e => e.Date, e => e.Price);
          }










          bool GetPrice(DateTime when, out double price, out double shift) {
          // To nullify the effect of hours/min/sec/millisec being different from 0
          when = new DateTime(when.Year, when.Month, when.Day);
          var found = false;
          shift = 1;
          double aPrice = 0;
          while (when >= _start && !found) {
          if (_adjDictionary.TryGetValue(when, out aPrice)) {
          found = true;
          }
          when = when.AddDays(-1);
          shift -= 1;
          }
          price = aPrice;
          return found;
          }

          double? GetReturn(DateTime start, DateTime end) {
          var startPrice = 0.0;
          var endPrice = 0.0;
          var shift = 0.0;
          var foundEnd = GetPrice(end, out endPrice, out shift);
          var foundStart = GetPrice(start.AddDays(shift), out startPrice, out shift);
          if (!foundStart || !foundEnd)
          return null;
          else
          return endPrice / startPrice - 1;
          }

          internal double? LastWeekReturn() {
          return GetReturn(DateTime.Now.AddDays(-7), DateTime.Now);
          }
          internal double? Last4WeeksReturn() {
          return GetReturn(DateTime.Now.AddDays(-28), DateTime.Now);
          }
          internal double? Last3MonthsReturn() {
          return GetReturn(DateTime.Now.AddMonths(-3), DateTime.Now);
          }
          internal double? Last6MonthsReturn() {
          return GetReturn(DateTime.Now.AddMonths(-6), DateTime.Now);
          }
          internal double? LastYearReturn() {
          return GetReturn(DateTime.Now.AddYears(-1), DateTime.Now);
          }






          internal double? StdDev() {
          var now = DateTime.Now;
          now = new DateTime(now.Year, now.Month, now.Day);
          var limit = now.AddYears(-3);
          var rets = new List<double>();
          while (now >= _start.AddDays(12) && now >= limit) {
          var ret = GetReturn(now.AddDays(-7), now);
          rets.Add(ret.Value);
          now = now.AddDays(-7);
          }
          var mean = rets.Average();
          var variance = rets.Select(r => Math.Pow(r - mean, 2)).Sum();
          var weeklyStdDev = Math.Sqrt(variance / rets.Count);
          return weeklyStdDev * Math.Sqrt(40);
          }
          internal double? MAV200() {
          return _adjDictionary.ToList()
          .OrderByDescending(k => k.Key)
          .Take(200).Average(k => k.Value);
          }
          internal double TodayPrice() {
          var price = 0.0;
          var shift = 0.0;
          GetPrice(DateTime.Now, out price, out shift);
          return price;
          }
          internal Summary GetSummary() {
          return new Summary(Ticker, _name, _assetClass, _assetSubClass,
          LastWeekReturn(), Last4WeeksReturn(), Last3MonthsReturn(),
          Last6MonthsReturn(), LastYearReturn(), StdDev(), TodayPrice(),
          MAV200());
          }
          }

          class Program {

          static string CreateUrl(string ticker, DateTime start, DateTime end)
          {
          return @"http://ichart.finance.yahoo.com/table.csv?s=" + ticker +
          "&a="+(start.Month - 1).ToString()+"&b="+start.Day.ToString()+"&c="+start.Year.ToString() +
          "&d="+(end.Month - 1).ToString()+"&e="+end.Day.ToString()+"&f="+end.Year.ToString() +
          "&g=d&ignore=.csv";
          }

          static void Main(string[] args) {
          // If you rise this above 5 you tend to get frequent connection closing on my machine
          // I'm not sure if it is msft network or yahoo web service
          ServicePointManager.DefaultConnectionLimit = 10;

          var tickers =
          File.ReadAllLines("ETFTest.csv")
          .Skip(1)
          .Select(l => l.Split(new[] { ',' }))
          .Where(v => v[2] != "Leveraged")
          .Select(values => Tuple.Create(values[0], values[1], values[2], values[3]))
          .ToArray();

          var len = tickers.Length;

          var start = DateTime.Now.AddYears(-2);
          var end = DateTime.Now;
          var cevent = new CountdownEvent(len);
          var summaries = new Summary[len];

          for(var i = 0; i < len; i++) {
          var t = tickers[i];
          var url = CreateUrl(t.Item1, start, end);
          using (var webClient = new WebClient()) {
          webClient.DownloadStringCompleted +=
          new DownloadStringCompletedEventHandler(downloadStringCompleted);
          webClient.DownloadStringAsync(new Uri(url), Tuple.Create(t, cevent, summaries, i));
          }
          }

          cevent.Wait();
          Console.WriteLine("\n");

          var top15perc =
          summaries
          .Where(s => s.LRS.HasValue)
          .OrderByDescending(s => s.LRS)
          .Take((int)(len * 0.15));
          var bottom15perc =
          summaries
          .Where(s => s.LRS.HasValue)
          .OrderBy(s => s.LRS)
          .Take((int)(len * 0.15));

          Console.WriteLine();
          Summary.Banner();
          Console.WriteLine("TOP 15%");
          foreach(var s in top15perc)
          s.Print();

          Console.WriteLine();
          Console.WriteLine("Bottom 15%");
          foreach (var s in bottom15perc)
          s.Print();

          }

          static void downloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) {
          var bigTuple = (Tuple<Tuple<string, string, string, string>, CountdownEvent, Summary[], int>)e.UserState;
          var tuple = bigTuple.Item1;
          var cevent = bigTuple.Item2;
          var summaries = bigTuple.Item3;
          var i = bigTuple.Item4;
          var ticker = tuple.Item1;
          var name = tuple.Item2;
          var asset = tuple.Item3;
          var subAsset = tuple.Item4;

          if (e.Error == null) {
          var adjustedPrices =
          e.Result
          .Split(new[] { '\n' })
          .Skip(1)
          .Select(l => l.Split(new[] { ',' }))
          .Where(l => l.Length == 7)
          .Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));

          var timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
          summaries[i] = timeSeries.GetSummary();
          cevent.Signal();
          Console.Write("{0} ", ticker);
          } else {
          Console.WriteLine("[{0} ERROR] ", ticker);
          summaries[i] = new Summary(ticker,name,"ERROR","ERROR",0,0,0,0,0,0,0,0);
          cevent.Signal();
          }
          }
          }
          }

          package etf.analyzer;

          import static java.lang.System.out;
          import static java.util.Comparator.comparing;
          import static java.util.stream.Collectors.*;

          import java.io.IOException;
          import java.nio.file.*;
          import java.time.LocalDate;
          import java.time.format.DateTimeFormatter;
          import java.util.*;
          import java.util.Map.Entry;
          import java.util.concurrent.CountDownLatch;
          import java.util.stream.Stream;

          class Event {
          public Event(LocalDate date, double price) {
          this.date = date;
          this.price = price;
          }
          public LocalDate getDate() {
          return date;
          }
          public double getPrice() {
          return price;
          }
          private LocalDate date;
          private double price;
          }
          class Summary {
          public Summary(String ticker, String name, String assetClass,
          String assetSubClass, OptionalDouble weekly, OptionalDouble fourWeeks,
          OptionalDouble threeMonths, OptionalDouble sixMonths, OptionalDouble oneYear,
          OptionalDouble stdDev, double price, OptionalDouble mav200) {
          this.ticker = ticker;
          this.name = name;
          this.assetClass = assetClass;
          // this.assetSubClass = assetSubClass;
          // Abracadabra ...
          this.lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d));
          this.weekly = weekly;
          this.fourWeeks = fourWeeks;
          this.threeMonths = threeMonths;
          this.sixMonths = sixMonths;
          this.oneYear = oneYear;
          this.stdDev = stdDev;
          this.mav200 = mav200;
          this.price = price;
          }
          private String ticker;
          private String name;
          private String assetClass;
          // private String assetSubClass;
          public OptionalDouble lrs;
          private OptionalDouble weekly;
          private OptionalDouble fourWeeks;
          private OptionalDouble threeMonths;
          private OptionalDouble sixMonths;
          private OptionalDouble oneYear;
          private OptionalDouble stdDev;
          private OptionalDouble mav200;
          private double price;

          static void banner() {
          out.printf("%-6s", "Ticker");
          out.printf("%-50s", "Name");
          out.printf("%-12s", "Asset Class");
          out.printf("%4s", "RS");
          out.printf("%4s", "1Wk");
          out.printf("%4s", "4Wk");
          out.printf("%4s", "3Ms");
          out.printf("%4s", "6Ms");
          out.printf("%4s", "1Yr");
          out.printf("%6s", "Vol");
          out.printf("%2s\n", "Mv");
          }
          void print() {
          out.printf("%-6s", ticker);
          out.printf("%-50s", name);
          out.printf("%-12s", assetClass);
          out.printf("%4.0f", lrs.orElse(0.0d) * 100);
          out.printf("%4.0f", weekly.orElse(0.0d) * 100);
          out.printf("%4.0f", fourWeeks.orElse(0.0d) * 100);
          out.printf("%4.0f", threeMonths.orElse(0.0d) * 100);
          out.printf("%4.0f", sixMonths.orElse(0.0d) * 100);
          out.printf("%4.0f", oneYear.orElse(0.0d) * 100);
          out.printf("%6.0f", stdDev.orElse(0.0d) * 100);
          if (price <= mav200.orElse(-Double.MAX_VALUE))
          out.printf("%2s\n", "X");
          else
          out.println();
          }
          }

          class TimeSeries {
          private String ticker;
          private LocalDate _start;
          private Map<LocalDate, Double> _adjDictionary;
          private String _name;
          private String _assetClass;
          private String _assetSubClass;

          public TimeSeries(String ticker, String name, String assetClass, String assetSubClass, List<Event> events) {
          this.ticker = ticker;
          this._name = name;
          this._assetClass = assetClass;
          this._assetSubClass = assetSubClass;
          this._adjDictionary = events.stream().collect(toMap(Event::getDate, Event::getPrice));
          this._start = events.size() - 1 > 0 ? events.get(events.size() - 1).getDate() : LocalDate.now().minusYears(99);
          }

          private static final class FindPriceAndShift {
          public FindPriceAndShift(boolean found, double aPrice, int shift) {
          this.found = found;
          this.price = aPrice;
          this.shift = shift;
          }
          private boolean found;
          private double price;
          private int shift;
          }

          private FindPriceAndShift getPrice(LocalDate when) {
          boolean found = false;
          int shift = 1;
          double aPrice = 0.0d;
          while ((when.equals(_start) || when.isAfter(_start)) && !found) {
          if (found = _adjDictionary.containsKey(when)) {
          aPrice = _adjDictionary.get(when);
          }
          when = when.minusDays(1);
          shift -= 1;
          }
          return new FindPriceAndShift(found, aPrice, shift);
          }

          OptionalDouble getReturn(LocalDate start, LocalDate endDate) {
          FindPriceAndShift foundEnd = getPrice(endDate);
          FindPriceAndShift foundStart = getPrice(start.plusDays(foundEnd.shift));
          if (!foundStart.found || !foundEnd.found)
          return OptionalDouble.empty();
          else {
          return OptionalDouble.of(foundEnd.price / foundStart.price - 1.0d);
          }
          }

          private OptionalDouble lastWeekReturn() {
          return getReturn(LocalDate.now().minusDays(7), LocalDate.now());
          }
          private OptionalDouble last4WeeksReturn() {
          return getReturn(LocalDate.now().minusDays(28), LocalDate.now());
          }
          private OptionalDouble last3MonthsReturn() {
          return getReturn(LocalDate.now().minusMonths(3), LocalDate.now());
          }
          private OptionalDouble last6MonthsReturn() {
          return getReturn(LocalDate.now().minusMonths(6), LocalDate.now());
          }
          private OptionalDouble lastYearReturn() {
          return getReturn(LocalDate.now().minusYears(1), LocalDate.now());
          }
          private Double sum(Collection<Double> d) {
          return d.parallelStream().reduce(0d, Double::sum);
          }
          private Double avg(Collection<Double> d) {
          return sum(d) / d.size();
          }
          private OptionalDouble stdDev() {
          LocalDate now = LocalDate.now();
          LocalDate limit = now.minusYears(3);
          List<Double> rets = new ArrayList<>();
          while (now.compareTo(_start.plusDays(12)) >= 0 && now.compareTo(limit) >= 0) {
          OptionalDouble ret = getReturn(now.minusDays(7), now);
          rets.add(ret.orElse(0d));
          now = now.minusDays(7);
          }
          Double mean = avg(rets);
          Double variance = avg(rets.parallelStream().map(r -> Math.pow(r - mean, 2)).collect(toList()));
          Double weeklyStdDev = Math.sqrt(variance);
          return OptionalDouble.of(weeklyStdDev * Math.sqrt(40));
          }
          private OptionalDouble MAV200() {
          return OptionalDouble.of(
          _adjDictionary.entrySet().parallelStream()
          .sorted(comparing((Entry<LocalDate,Double> p) -> p.getKey()).reversed())
          .limit(200).mapToDouble(e -> e.getValue()).average().orElse(0d)
          );
          }
          private double todayPrice() {
          return getPrice(LocalDate.now()).price;
          }
          public Summary getSummary() {
          return new Summary(ticker, _name, _assetClass, _assetSubClass,
          lastWeekReturn(), last4WeeksReturn(), last3MonthsReturn(),
          last6MonthsReturn(), lastYearReturn(), stdDev(), todayPrice(),
          MAV200());
          }
          }

          public class Program {

          static String createUrl(String ticker, LocalDate start, LocalDate end) {
          return "http://ichart.finance.yahoo.com/table.csv?s=" + ticker + "&a="
          + (start.getMonthValue() - 1) + "&b=" + start.getDayOfMonth()
          + "&c=" + start.getYear() + "&d=" + (end.getMonthValue() - 1)
          + "&e=" + end.getDayOfMonth() + "&f=" + end.getYear()
          + "&g=d&ignore=.csv";
          }

          public static void main(String[] args) throws IOException, InterruptedException {

          List<String[]> tickers = Files.lines(FileSystems.getDefault().getPath("ETFs.csv"))
          .skip(1)
          .parallel()
          .map(line -> line.split(",", 4))
          .filter(v -> !v[2].equals("Leveraged"))
          .collect(toList());

          int len = tickers.size();

          LocalDate start = LocalDate.now().minusYears(2);
          LocalDate end = LocalDate.now();
          CountDownLatch cevent = new CountDownLatch(len);
          Summary[] summaries = new Summary[len];

          try (WebClient webClient = new WebClient()) {
          for (int i = 0; i < len; i++) {
          String[] t = tickers.get(i);
          final int index = i;
          webClient.downloadStringAsync(createUrl(t[0], start, end), result -> {
          summaries[index] = downloadStringCompleted(t[0], t[1], t[2], t[3], result);
          cevent.countDown();
          });
          }
          cevent.await();
          }

          Stream<Summary> top15perc =
          Arrays.stream(summaries)
          .filter(s -> s.lrs.isPresent())
          .sorted(comparing((Summary p) -> p.lrs.get()).reversed())
          .limit((int)(len * 0.15));
          Stream<Summary> bottom15perc =
          Arrays.stream(summaries)
          .filter(s -> s.lrs.isPresent())
          .sorted(comparing((Summary p) -> p.lrs.get()))
          .limit((int)(len * 0.15));

          System.out.println();
          Summary.banner();
          System.out.println("TOP 15%");
          top15perc.forEach(
          s -> s.print());

          System.out.println();
          Summary.banner();
          System.out.println("BOTTOM 15%");
          bottom15perc.forEach(
          s -> s.print());

          }

          public static Summary downloadStringCompleted(String ticker, String name, String asset, String subAsset,
          DownloadStringAsyncCompletedArgs e
          ) {
          Summary summary;
          if (e.getError() == null) {
          List<Event> adjustedPrices =
          Arrays.stream(e.getResult().split("\n"))
          .skip(1)
          .parallel()
          .map(line -> line.split(",", 7))
          .filter(l -> l.length == 7)
          .map(v -> new Event(LocalDate.parse(v[0], DateTimeFormatter.ISO_LOCAL_DATE), Double.valueOf(v[6]))).collect(toList());
          TimeSeries timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
          summary = timeSeries.getSummary();
          } else {
          System.err.printf("[%s ERROR]", ticker);
          final OptionalDouble zero = OptionalDouble.of(0d);
          summary = new Summary(ticker, name, "ERROR", "ERROR", zero, zero, zero, zero, zero, zero, 0d, zero);
          }
          return summary;
          }
          }

          Some observations:
          • The code depends on the yahoo to get historical stock prices and sometimes Yahoo is not available for stock prices. Wait five minutes and run the program again. 
          • The Java code is much much faster than the C# code, but it is going to yahoo to get historical stock prices which is going to be the limiting factor.  I don't think the C# should be slower than the Java code but it is and I'm not sure why it is. I'm pretty sure the poor C# performance is to do with the dot net WebClient configuration but I might be wrong.
          • In Java 8, just to show how easy it is, I've used parallelStream() and .parallel() in a couple of places, but these can be removed for the equivalent functionality. I can see no noticeable difference in performance with or without these calls when using an 8 core machine. As I said above I believe that the limiting factor is going to yahoo to get historical stock prices. There is not that much number crunching to do and I suspect the time taken to do it pales into insignificance next to the internet fetch time. Doing the calculations in parallel just isn't worth it. But its good to see how easy it is to parallelize work if you want to. Being able to simply say Collection.parallelStream() and Stream.parallel() is incredible if you find a sensible use case for it.
              • The Java 8 code is a little longer than the C# code. In Java 7, I'm guessing the code would be at least two times longer and very very ugly if written in a similar style. The Java8 code is not as concise as C# or Scala but at least its in the same ball park. Partly this is due to Java POJO boilerplate (e.g. the FindPriceAndShift class and the Event class getter and setters) but thats is no big deal (IMO). The Java code is also more verbose because types must be declared unlike in C# where you can use "var" instead of a type declaration and usually the C# compiler infers what you mean. 
              • Tuples. C# has Tuples, Scala has Tuples but apparently their use is the spawn of satan and civilization will collapse if they are used in Java even to hold temporary results when parsing comma separated values into another class. (Oracle will be removing HashMap from Java9 apparently for similar reasons ;)) In order not to be arrested by the Java thought police I avoided succumbing to this. The C# code uses them, but I've managed to avoid them.
              • Output parameters.  In my scala translation in 2009, my translation to Scala used a return tuple instead of the C# output parameters (which I personally found confusing in the C# algorithm). In the Java 8 version I used a POJO FindPriceAndShift rather than sell my soul to wicked tuple monster.
              • The C# code uses the "double?" type which is a double that can have an empty value and it means you can write LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4 and any of fourWeeksthreeMonthssixMonths, and oneYear can be empty without causing a null pointer exception etc.  Java 8 does ship with OptionalDouble. But, strangely, you can't say a.add(b).add(c).divide(d). So I wrote an OptionalDouble class which does do this, so you can say lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d). If you look at the code you can see its almost trivially simple. Writing lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d) is  not very pretty compared to the C# or Scala equivalent but a lot of Java people are used to doing this method chaining with BigDecimal: but with OptionalDouble now it can be null/emptyValue safe. (The same thing can easily be done to create a an OptionalBigDecimal class obviously.) (And this OptionalDouble stuff could easily have been done in Java7 too.)
• Java does not have a C#-style WebClient, so I have taken the open-source Jetty HTTP client and wrapped it in a simple wrapper to make it look like the C# WebClient. See GitHub for the WbClient class.
• Java lives on open source. If the C# code is slow because the .NET WebClient is doing something stupid, it's hard to find out, as it's closed source. If Jetty's Java HTTP client is broken, you can debug the source or switch to Apache's HTTP client: the best open-source libraries emerge through natural selection.
• There is some surprising functionality missing from Stream and the Collections APIs: there is no zip, takeWhile, or dropWhile for sequential streams. I'm guessing Java 9, Guava and others will fill this gap pretty fast. (Scala already ships all three, as sketched below.)
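For reference, a quick sketch of the Scala collection methods in question:

    val xs = List(1, 2, 3, 4, 5)
    xs zip List("a", "b", "c")  // List((1,a), (2,b), (3,c))
    xs takeWhile (_ < 4)        // List(1, 2, 3)
    xs dropWhile (_ < 4)        // List(4, 5)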
• When I showed the code below to an experienced colleague who has only used Java <= 6, he said "that looks like C++ to me: that's completely unmaintainable". Sigh.
    Stream<Summary> top15perc =
        Arrays.stream(summaries)
              .filter(s -> s.lrs.isPresent())
              .sorted(comparing((Summary p) -> p.lrs.get()).reversed())
              .limit((int) (len * 0.15));
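For what it's worth, a rough Scala rendering of the same pipeline, assuming a Summary class with an lrs: Option[Double] field mirroring the Java version (the sample data is invented):

    case class Summary(lrs: Option[Double]) // stand-in for the post's Summary class
    val summaries = Array(Summary(Some(0.9)), Summary(None), Summary(Some(0.4)))
    val len = summaries.length

    val top15perc = summaries.toSeq
      .filter(_.lrs.isDefined)
      .sortBy(_.lrs.get)(Ordering[Double].reverse)
      .take((len * 0.15).toInt) // with realistic input sizes this keeps the top 15%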



                  by Tim Azzopardi (noreply@blogger.com) at March 31, 2014 12:25 AM

                  March 23, 2014

                  scala-lang.org

                  Scala 2.10.4 is now available!

                  We are very happy to announce the final release of Scala 2.10.4!

                  The release is available for download from scala-lang.org or from Maven Central.

                  The Scala team and contributors fixed 33 issues since 2.10.3!

                  In total, 36 RC1 pull requests, 12 RC2 pull requests and 3 RC3 pull requests were merged on GitHub.

                  Known Issues

                  Before reporting a bug, please have a look at these known issues.

                  Scala IDE for Eclipse

The Scala IDE with this release built right in is available through an update site for Eclipse 4.2/4.3 (Juno/Kepler).

                  Have a look at the getting started guide for more info.

                  New features in the 2.10 series

                  Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

                  Experimental features

                  The API is subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

                  A big thank you to all the contributors!

  #  Author
 26  Jason Zaugg
 15  Adriaan Moors
  5  Eugene Burmako
  3  A. P. Marki
  3  Simon Schaefer
  3  Mirco Dotta
  3  Luc Bourlier
  2  Paul Phillips
  2  François Garillot
  1  Mark Harrah
  1  Vlad Ureche
  1  James Ward
  1  Heather Miller
  1  Roberto Tyley

                  Commits and the issues they fixed since v2.10.3

Issue(s)          Commit    Message
SI-7902           5f4011e   [backport] SI-7902 Fix spurious kind error due to an unitialized symbol
SI-8205           8ee165c   SI-8205 [nomaster] backport test pos.lineContent
SI-8126, SI-6566  806b6e4   Backports library changes related to SI-6566 from a419799
SI-8146           ff13742   [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types
SI-6443, SI-8143  1baf11a   SI-8143 Fix bug with super-accessors / dependent types
SI-8152           9df2dcc   [nomaster] SI-8152 Backport variance validator performance fix
SI-8111           c91d373   SI-8111 Expand the comment with a more detailed TODO
SI-8111           2c770ae   SI-8111 Repair symbol owners after abandoned named-/default-args
SI-7120, SI-8114  5876e8c   [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120
SI-7636, SI-6563  255c51b   SI-6563 Test case for already-fixed crasher
SI-8104           c0cb1d8   [nomaster] codifies the state of the art wrt SI-8104
SI-8085           7e85b59   SI-8085 Fix BrowserTraverser for package objects
SI-8085           a12dd9c   Test demonstrating SI-8085
SI-6426           47562e7   Revert "SI-6426, importable _."
SI-8062           f0d913b   SI-8062 Fix inliner cycle with recursion, separate compilation
SI-7912           006e2f2   SI-7912 Be defensive calling `toString` in `MatchError#getMessage`
SI-8060           bb427a3   SI-8060 Avoid infinite loop with higher kinded type alias
SI-7995           5ed834e   SI-7995 completion imported vars and vals
SI-8019           c955cf4   SI-8019 Make Publisher check PartialFunction is defined for Event
SI-8029           fdcc262   SI-8029 Avoid multi-run cyclic error with companions, package object
SI-7439           8d74fa0   [backport] SI-7439 Avoid NPE in `isMonomorphicType` with stub symbols.
SI-8010           9036f77   SI-8010 Fix regression in erasure double definition checks
SI-7982           7d41094   SI-7982 Changed contract of askLoadedType to unload units by default
SI-6913           7063439   SI-6913 Fixing semantics of Future fallbackTo to be according to docs
SI-7458           02308c9   SI-7458 Pres. compiler must not observe trees in silent mode
SI-7548           652b3b4   SI-7548 Test to demonstrate residual exploratory typing bug
SI-7548           b7509c9   SI-7548 askTypeAt returns the same type whether the source was fully or targeted
SI-8005           3629b64   SI-8005 Fixes NoPositon error for updateDynamic calls
SI-8004           696545d   SI-8004 Resolve NoPosition error for applyDynamicNamed method call
SI-7463, SI-8003  b915f44   SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls
SI-7280           053a274   [nomaster] SI-7280 Scope completion not returning members provided by imports
SI-7915           04df2e4   SI-7915 Corrected range positions created during default args expansion
SI-7776           d15ed08   [backport] SI-7776 post-erasure signature clashes are now macro-aware
SI-6546           075f6f2   SI-6546 InnerClasses attribute refers to absent class
SI-7638, SI-4012  e09a8a2   SI-4012 Mixin and specialization work well
SI-7519           50c8b39e  SI-7519: Additional test case covering sbt/sbt#914
SI-7519           ce74bb0   [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback
SI-4936, SI-6026  e350bd2   [nomaster] SI-6026 backport getResource bug fix
SI-6026           2bfe0e7   SI-6026 REPL checks for javap before tools.jar
SI-7295           25bcba5   SI-7295 Fix windows batch file with args containing parentheses
SI-7020           7b56021   Disable tests for SI-7020
SI-7783           2ccbfa5   SI-7783 Don't issue deprecation warnings for inferred TypeTrees
SI-7815           733b322   SI-7815 Dealias before deeming method type as dependent

                  Complete commit list!

sha       Title
5f4011e   [backport] SI-7902 Fix spurious kind error due to an unitialized symbol
8ee165c   SI-8205 [nomaster] backport test pos.lineContent
d167f14   [nomaster] corrects an error in reify’s documentation
806b6e4   Backports library changes related to SI-6566 from a419799
ff13742   [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types
cbb88ac   [nomaster] Update MiMa and use new wildcard filter
1baf11a   SI-8143 Fix bug with super-accessors / dependent types
9df2dcc   [nomaster] SI-8152 Backport variance validator performance fix
c91d373   SI-8111 Expand the comment with a more detailed TODO
2c770ae   SI-8111 Repair symbol owners after abandoned named-/default-args
5876e8c   [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120
bd4adf5   More clear implicitNotFound error for ExecutionContext
255c51b   SI-6563 Test case for already-fixed crasher
c0cb1d8   [nomaster] codifies the state of the art wrt SI-8104
7e85b59   SI-8085 Fix BrowserTraverser for package objects
a12dd9c   Test demonstrating SI-8085
3fa2c97   Report error on code size overflow, log method name.
2aa9da5   Partially revert f8d8f7d08d.
47562e7   Revert "SI-6426, importable _."
f0d913b   SI-8062 Fix inliner cycle with recursion, separate compilation
9cdbe28   Fixup #3248 missed a spot in pack.xml
006e2f2   SI-7912 Be defensive calling `toString` in `MatchError#getMessage`
bb427a3   SI-8060 Avoid infinite loop with higher kinded type alias
27a3860   Update README, include doc/licenses in distro
139ba9d   Add attribution for Typesafe.
e555106   Remove docs/examples; they reside at scala/scala-dist
dc6dd58   Remove unused android test and corresponding license.
f8d8f7d   Do not distribute partest and its dependencies.
5ed834e   SI-7995 completion imported vars and vals
c955cf4   SI-8019 Make Publisher check PartialFunction is defined for Event
fdcc262   SI-8029 Avoid multi-run cyclic error with companions, package object
8d74fa0   [backport] SI-7439 Avoid NPE in `isMonomorphicType` with stub symbols.
9036f77   SI-8010 Fix regression in erasure double definition checks
3faa2ee   [nomaster] better error messages for various macro definition errors
7d41094   SI-7982 Changed contract of askLoadedType to unload units by default
7063439   SI-6913 Fixing semantics of Future fallbackTo to be according to docs
02308c9   SI-7458 Pres. compiler must not observe trees in silent mode
652b3b4   SI-7548 Test to demonstrate residual exploratory typing bug
b7509c9   SI-7548 askTypeAt returns the same type whether the source was fully or targeted
0c963c9   [nomaster] teaches toolbox about -Yrangepos
3629b64   SI-8005 Fixes NoPositon error for updateDynamic calls
696545d   SI-8004 Resolve NoPosition error for applyDynamicNamed method call
b915f44   SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls
053a274   [nomaster] SI-7280 Scope completion not returning members provided by imports
eb9f0f7   [nomaster] Adds test cases for scope completion
3a8796d   [nomaster] Test infrastructure for scope completion
04df2e4   SI-7915 Corrected range positions created during default args expansion
ec89b59   Upgrade pax-url-aether to 1.6.0.
1d29c0a   [backport] Add buildcharacter.properties to .gitignore.
31ead67   IDE needs swing/actors/continuations
852a947   Allow retrieving STARR from non-standard repo for PR validation
40af1e0   Allow publishing only core (pr validation)
ba0718f   Render relevant properties to buildcharacter.properties
d15ed08   [backport] SI-7776 post-erasure signature clashes are now macro-aware
6045a05   Fix completion after application with implicit arguments
075f6f2   SI-6546 InnerClasses attribute refers to absent class
e09a8a2   SI-4012 Mixin and specialization work well
50c8b39e  SI-7519: Additional test case covering sbt/sbt#914
ce74bb0   [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback
e350bd2   [nomaster] SI-6026 backport getResource bug fix
2bfe0e7   SI-6026 REPL checks for javap before tools.jar
25bcba5   SI-7295 Fix windows batch file with args containing parentheses
7b56021   Disable tests for SI-7020
8986ee4   Disable flaky presentation compiler test.
2ccbfa5   SI-7783 Don't issue deprecation warnings for inferred TypeTrees
ee9138e   Bump version to 2.10.4 for nightlies
733b322   SI-7815 Dealias before deeming method type as dependent

                  March 23, 2014 11:00 PM


                  March 19, 2014

                  scala-lang.org

                  Scala 2.11.0-RC3 is now available!

                  We are very pleased to announce Scala 2.11.0-RC3, the second (sic) release candidate of Scala 2.11.0! Download it now from scala-lang.org or via Maven Central.

                  There won’t be an RC2 release because we missed a blocker issue (thanks for the reminder, Chee Seng!). Unfortunately, the mistake wasn’t caught until after the tag was pushed. Jason quickly fixed the bug, which is the only difference between RC3 and RC2.

                  Please do try out this release candidate to help us find any serious regressions before the final release. The next release candidate (or the final) will be cut on Friday March 28, if there are no unresolved blocker bugs at noon (PST). Our goal is to have the next release be the final – please help us make sure there are no important regressions!

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x ecosystem; please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

                  For production use, we recommend the latest stable release, 2.10.4.

                  The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

                  The Scala team and contributors fixed 601 bugs that are exclusive to Scala 2.11.0-RC3! We also backported as many as possible. With the release of 2.11, 2.10 backports will be dialed back.

                  Since the last RC, we fixed 54 issues via 37 merged pull requests.

                  A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

                  Concretely, according to git log --no-merges --oneline master --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn, 111 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, Den Shabalin, Simon Ochsenreither, A. P. Marki, Miguel Garcia, James Iry, Denys Shabalin, Rex Kerr, Grzegorz Kossakowski, Vladimir Nikolaev, Eugene Vigdorchik, François Garillot, Mirco Dotta, Rüdiger Klaehn, Raphael Jolly, Kenji Yoshida, Paolo Giarrusso, Antoine Gourlay, Hubert Plociniczak, Aleksandar Prokopec, Simon Schaefer, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Luc Bourlier, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Vlad Ureche, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Olivier Blanvillain, Lukas Rytz, Iulian Dragos, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Roberto Tyley, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Mark Harrah, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Ward, James Roper, Havoc Pennington, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres.

                  Thank you all very much.

If you find any errors or omissions in these release notes, please submit a PR!

                  Reporting Bugs / Known Issues

                  Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

                  Before reporting a bug, please have a look at these known issues.

                  Scala IDE for Eclipse

                  The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

                  Available projects

                  The following Scala projects have already been released against 2.11.0-RC3! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

                  "org.scalacheck"         %% "scalacheck"         % "1.11.3"
                  "org.scalafx"            %% "scalafx"            % "1.0.0-R8"
                  "org.scalafx"            %% "scalafx"            % "8.0.0-R4"
                  "com.typesafe.akka"      %% "akka-actor"         % "2.3.0"
                  "com.github.scopt"       %% "scopt"              % "3.2.0"
                  "org.scalatest"          %% "scalatest"          % "2.1.2"
                  "org.specs2"             %% "specs2"             % "2.3.10"
                  "org.scalaz"             %% "scalaz-core"        % "7.0.6"
                  "org.scala-lang.modules" %% "scala-async"        % "0.9.0"

                  The following projects were released against 2.11.0-RC1, with an RC3 build hopefully following soon:

                  "io.argonaut"            %% "argonaut"           % "6.0.3"
                  "com.nocandysw"          %% "platform-executing" % "0.5.0"
                  "com.clarifi"            %% "f0"                 % "1.1.1"
                  "org.parboiled"          %% "parboiled-scala"    % "1.1.6"
                  "com.sksamuel.scrimage"  %% "scrimage"           % "1.3.16"

                  Cross-building with sbt 0.13

                  When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full sbt build and the equivalent Maven build, see the example.

                  scalaVersion        := "2.11.0-RC3"
                  
                  crossScalaVersions  := Seq("2.11.0-RC3", "2.10.3")
                  
                  // add scala-xml dependency when needed (for Scala 2.11 and newer)
                  // this mechanism supports cross-version publishing
                  libraryDependencies := {
                    CrossVersion.partialVersion(scalaVersion.value) match {
                      case Some((2, scalaMajor)) if scalaMajor >= 11 =>
                        libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.0"
                      case _ =>
                        libraryDependencies.value
                    }
                  }

                  Important changes

                  For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

                  Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

                  We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

• SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>. (See the sketch after this list.)
                  • SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }, as the bar in bar: String refers to the bar with type T. The Foo[String]’s bar is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)
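To make the SI-4577 change concrete, here is a small illustrative sketch (Foo is just an example object, not from the release notes):

    object Foo
    def check(x: Any) = x match {
      case _: Foo.type => "matched: Foo eq x" // 2.11 semantics: reference equality
      case _           => "no match"
    }
    // Under 2.10 the first case compiled to Foo == x (without warning);
    // write `case Foo =>` if equality is what you meant.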

                  The following changes were made after a deprecation cycle (Thank you, @soc, for leading the deprecation effort!)

                  • SI-6809 Case classes without a parameter list are no longer allowed.
                  • SI-7618 Octal number literals no longer supported.

                  Finally, some notable improvements and bug fixes:

                  • SI-7296 Case classes with > 22 parameters are now allowed.
                  • SI-3346 Implicit arguments of implicit conversions now guide type inference.
                  • SI-6240 Thread safety of reflection API.
                  • #3037 Experimental support for SAM synthesis.
• #2848 Name-based pattern-matching (sketched after this list).
                  • SI-6169 Infer bounds of Java-defined existential types.
                  • SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
                  • SI-5917 Improve public AST creation facilities.
                  • SI-8063 Expose much needed methods in public reflection/macro API.
                  • SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).
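As a concrete illustration of name-based pattern-matching (#2848): an extractor’s unapply may now return any type with isEmpty and get members rather than an Option, which avoids allocating Some/None. The names below are invented for the example:

    // An Option-like value class: no allocation on the match path.
    final class OptInt(val get: Int) extends AnyVal {
      def isEmpty: Boolean = get < 0 // negative is the "no match" sentinel
    }
    object Positive {
      def unapply(x: Int): OptInt = new OptInt(if (x > 0) x else -1)
    }

    7 match {
      case Positive(n) => println("positive: " + n) // prints "positive: 7"
      case _           => println("not positive")
    }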

                  To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

                  Deprecations

                  Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler). They make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

                  The following language “warts” have been deprecated:

                  • SI-7605 Procedure syntax (only under -Xfuture).
                  • SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
• SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions. (See the sketch after this list.)
                  • SI-8035 Automatic insertion of () on missing argument lists.
                  • SI-6675 Auto-tupling in patterns.
                  • SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
                  • SI-1503 Unsound type assumption for stable identifier and literal patterns.
                  • SI-7629 View bounds (under -Xfuture).
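On the withFilter point above: a guard in a for-comprehension desugars to withFilter rather than filter, so a custom container needs to provide it. A minimal illustrative sketch (Box is an invented example type):

    class Box[A](private val items: List[A]) {
      def map[B](f: A => B): Box[B] = new Box(items.map(f))
      // Conventionally withFilter returns a lazy view; eager filtering keeps the sketch short.
      def withFilter(p: A => Boolean): Box[A] = new Box(items.filter(p))
      override def toString = "Box(" + items + ")"
    }

    val b = new Box(List(1, 2, 3))
    for (x <- b if x > 1) yield x * 10 // desugars to b.withFilter(_ > 1).map(_ * 10)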

                  We’d like to emphasize the following library deprecations:

                  • #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
                  • scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors
                  • SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise
                  • SI-3235 Deprecate round on Int and Long (#3581).
                  • We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

                  Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

                  Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

                  We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).

                  Forwards and Back

                  We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

                  Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

                  Concretely

                  Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

                  Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

                  Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that this means it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)

                  New features in the 2.11 series

                  This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

                  • Collections

                    • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
                    • Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively).
                    • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
                    • List has improved performance on map, flatMap, and collect.
                    • See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
                  • Modularization

                    • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above.
                    • The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
                  • Reflection, macros and quasiquotes

                    • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
                    • See also this summary of the experimental side of the 2.11 development cycle.
                    • #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
                  • Back-end

                    • The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
                    • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
                    • Branch elimination through constant analysis #2214
                  • Compiler Performance

                    • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2-M2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
                    • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
                    • Improve performance of reflection SI-6638
• IDE

  • Numerous bug fixes and improvements!

                  • REPL

• Warnings

  • Warn about unused private / local terms and types, and unused imports, under -Xlint. This will even tell you when a local var could be a val.

                  • Slimming down the compiler

                    • The experimental .NET backend has been removed from the compiler.
                    • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
                    • Search and destroy mission for ~5000 chunks of dead code. #1648

                  License clarification

                  Scala is now distributed under the standard 3-clause BSD license. Originally, the same 3-clause BSD license was adopted, but slightly reworded over the years, and the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

                  March 19, 2014 11:00 PM

                  March 13, 2014

                  Functional Jobs

                  Senior Engineer & Architect - Clojure, JavaScript and more at Q-Centrix (Full-time)

                  Help us build an amazing team from the ground up in San Diego! As Senior Engineer, you’ll lead selection for our full technology stack and architect services-oriented solutions to our most complex problems.

                  Our exciting technical challenges help reduce healthcare costs. You’ll:

                  • Apply machine learning to help hospitals allocate nursing resources effectively
                  • Build awesome, next-generation healthcare applications impacting quality of care
                  • Implement a stable and highly available infrastructure focused on security (HIPAA)
                  • Coordinate day-to-day work automatically for hundreds of internal operations staff
                  • Work with and contribute to the best open source tools, languages and frameworks

                  A general list of requirements:

                  • Several years web development or related software engineering experience
                  • Demonstrated success as a team lead or senior engineer and mentorship skills
                  • Expertise with at least one of: Clojure, Scala, JavaScript
                  • A test-first mentality with the ability to make stable, working products quickly
                  • Want to live or move to San Diego and help mold a new company’s culture

                  We will be a small team and we want people that love getting their hands dirty. We’re serious about building a great place to work in San Diego. Please get in touch to talk about some of our ideas for promoting engineer growth:

                  • Conference travel and attendance at least once yearly, more for presenters
                  • Host internal/external speakers for tech talks, possibly open to the public
                  • Allow for flexible work hours and the opportunity to work from home
                  • Start a book library in the office with company purchased books on a range of topics
                  • Set aside some time each week for experimental projects (i.e., 20% time)

                  We also offer competitive salaries, insurance, and a 401k plan. You will report directly to the VP of Technology & Development, a fellow engineer. Team management responsibilities are flexible depending on your preference.

                  About Q-Centrix

                  Formed in 2010, Q-Centrix provides outsourced clinical data abstraction, analysis, and reporting services to hospitals. Q-Centrix is the largest and fastest growing provider of quality related outsourcing services in the nation. A recent partnership with growth-focused private equity firm Sterling Partners gives Q-Centrix the resources and managerial expertise to continue growing at a rapid rate.

                  Get information on how to apply for this position.

                  March 13, 2014 08:18 PM

                  Clojure(Script) Web Application Developer at Q-Centrix (Full-time)

                  Healthcare is complex. It also offers amazing opportunities to build applications that make a significant impact directly to people's lives by helping hospitals improve the quality of care they deliver. At Q-Centrix, we extract and use quality-of-care data to drive results that matter. Today, our tools and services are helping hundreds of hospitals throughout the US take action to improve care.

                  As a team member, you'll help reduce healthcare costs and:

                  • Apply machine learning to help hospitals allocate nursing resources effectively
                  • Build awesome web applications based around Clojure APIs and ClojureScript front-ends (think Om)
                  • Coordinate day-to-day work automatically for hundreds of internal operations staff
                  • Work with and contribute to the best open source tools, languages and frameworks

                  A general list of requirements:

                  • Several years web development experience
                  • At least 6 months of experience with Clojure in a production setting
                  • Demonstrated success as a team lead or senior engineer with mentorship skills
                  • A test-first mentality with the ability to make stable, working products quickly
                  • We deal with protected health information so you must be located in the US
                  • Experience with both Java and JavaScript are nice to have

                  We will be a small team and we want people that love getting their hands dirty. We’re serious about building a great place to work in San Diego. Please get in touch to talk about some of our ideas for promoting engineer growth.

                  We also offer competitive salaries, insurance, and a 401k plan. You will report directly to the VP of Technology & Development, a fellow engineer.

                  About Q-Centrix

                  Formed in 2010, Q-Centrix provides outsourced clinical data abstraction, analysis, and reporting services to hospitals. Q-Centrix is the largest and fastest growing provider of quality related outsourcing services in the nation. A recent partnership with growth-focused private equity firm Sterling Partners gives Q-Centrix the resources and managerial expertise to continue growing at a rapid rate.

                  Get information on how to apply for this position.

                  March 13, 2014 08:18 PM

                  March 05, 2014

                  scala-lang.org

                  Scala 2.11.0-RC1 is now available!

                  We are very pleased to announce the first release candidate of Scala 2.11.0! Download it now from scala-lang.org or via Maven Central.

                  Please do try out this release candidate to help us find any serious regressions before the final release. The next release candidate will be cut on Monday March 17, if there are no unresolved blocker bugs at noon (PST). Subsequent RCs will be released on a weekly schedule, with Monday at noon (PST) being the cut-off for blocker bug reports. Our goal is to have no more than three RCs for this release – please help us achieve this by testing your project soon!

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If it does not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x ecosystem; please see below for a list. This release is not binary compatible with the 2.10.x series, which allows us to keep improving the Scala standard library.

                  For production use, we recommend the latest stable release, 2.10.3 (2.10.4 final coming soon).

                  The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

The Scala team and contributors fixed 544 bugs that are exclusive to Scala 2.11.0-RC1! We also backported as many fixes as possible. With the release of 2.11, the volume of 2.10 backports will be dialed back.

                  Since the last milestone, we fixed 133 issues via 154 merged pull requests.

                  A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

                  Concretely, between Jan 2013 and today, 69 contributors have helped improve Scala!

                  Our gratitude goes to @paulp, @som-snytt, @soc, @JamesIry, @Ichoran, @vigdorchik, @kzys, @Blaisorblade, @rjolly, @gourlaysama, @xuwei-k, @sschaef, @rklaehn, @lexspoon, @folone, @qerub, @etaty, @ViniciusMiana, @ScrapCodes, @pavelpavlov, @jedesah, @ihji, @harrah, @aztek, @clhodapp, @vy, @eed3si9n, @mergeconflict, @greenrd, @rjfwhite, @danielhopkins, @khernyo, @u-abramchuk, @mt2309, @ivmaykov, @yllan, @jrudolph, @jannic, @non, @dcsobral, @chuvoks, @rtyley.

                  With special thanks to the team at EPFL: @xeno-by, @densh, @magarciaEPFL, @VladimirNik, @lrytz, @VladUreche, @heathermiller, @sjrd, @hubertp, @OlivierBlanvillain, @namin, @cvogt, @vjovanov.

If you find any errors or omissions in these release notes, please submit a PR!

                  Reporting Bugs / Known Issues

                  Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

                  Before reporting a bug, please have a look at these known issues.

                  Scala IDE for Eclipse

                  The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

                  Available projects

                  The following Scala projects have already been released against 2.11.0-RC1! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

                  "org.scalacheck"    %% "scalacheck"         % "1.11.3"
                  "org.scalafx"       %% "scalafx"            % "1.0.0-R8"
                  "org.scalatest"     %% "scalatest"          % "2.1.0"
                  "org.specs2"        %% "specs2"             % "2.3.9"
                  "com.typesafe.akka" %% "akka-actor"         % "2.3.0-RC4"
                  "org.scalaz"        %% "scalaz-core"        % "7.0.6"
                  "com.nocandysw"     %% "platform-executing" % "0.5.0"

                  NOTE: RC1 ships with akka-actor 2.3.0-RC4 (the final is out now, but wasn’t yet available when RC1 was cut). The next Scala 2.11 RC will ship with akka-actor 2.3.0 final.

                  Cross-building with sbt 0.13

                  When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

                  Here’s how we recommend handling this in sbt 0.13. For the full build, see @gkossakowski’s example.

                  scalaVersion        := "2.11.0-RC1"
                  
                  crossScalaVersions  := Seq("2.11.0-RC1", "2.10.3")
                  
                  // add scala-xml dependency when needed (for Scala 2.11 and newer)
                  // this mechanism supports cross-version publishing
                  libraryDependencies := {
                    CrossVersion.partialVersion(scalaVersion.value) match {
                      case Some((2, scalaMajor)) if scalaMajor >= 11 =>
                        libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.0"
                      case _ =>
                        libraryDependencies.value
                    }
                  }

                  Important changes

                  For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

                  Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

                  We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

                  • SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>.
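
To see the difference concretely, here is a minimal sketch (the object Foo and the surrounding code are hypothetical illustrations, not from the release notes):

object Foo

def describe(x: Any): String = x match {
  case _: Foo.type => "the Foo singleton" // 2.11 compiles this test to `Foo eq x`, as specified
  case _           => "something else"    // 2.10 compiled it to `Foo == x`, without warning
}

describe(Foo) // "the Foo singleton"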

The following changes were made after a deprecation cycle. (Thank you, @soc, for leading the deprecation effort!)

                  • SI-6809 Case classes without a parameter list are no longer allowed.
                  • SI-7618 Octal number literals no longer supported.

                  Finally, some notable improvements and bug fixes:

                  • SI-7296 Case classes with > 22 parameters are now allowed.
                  • SI-3346 Implicit arguments of implicit conversions now guide type inference.
                  • SI-6240 Thread safety of reflection API.
                  • #3037 Experimental support for SAM synthesis.
                  • #2848 Name-based pattern-matching.
                  • SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues).
                  • SI-6169 Infer bounds of Java-defined existential types.
                  • SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
                  • SI-5917 Improve public AST creation facilities.
                  • SI-8063 Expose much needed methods in public reflection/macro API.
                  • SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).

                  To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.
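
In sbt, for instance, opting in is a one-liner (a sketch; only the -Xfuture flag itself comes from the release notes):

scalacOptions += "-Xfuture"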

                  Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler). Deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

                  The following language “warts” have been deprecated:

                  • SI-7605 Procedure syntax (only under -Xfuture).
                  • SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
• SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions (see the sketch after this list).
                  • SI-8035 Automatic insertion of () on missing argument lists.
                  • SI-6675 Auto-tupling in patterns.
                  • SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
                  • SI-1503 Unsound type assumption for stable identifier and literal patterns.
                  • SI-7629 View bounds (under -Xfuture).
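
As an illustration of the withFilter point above, here is a minimal sketch with a hypothetical Box container. For-comprehensions with guards desugar to withFilter, so the container must define it rather than rely on a rewrite to filter:

// hypothetical container usable in for-comprehensions with guards
case class Box[A](value: Option[A]) {
  def map[B](f: A => B): Box[B] = Box(value.map(f))
  def withFilter(p: A => Boolean): Box[A] = Box(value.filter(p))
}

// desugars to Box(Some(3)).withFilter(x => x > 2).map(x => x * 2)
for (x <- Box(Some(3)) if x > 2) yield x * 2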

                  We’d like to emphasize the following library deprecations:

                  • #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
• scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
                  • SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise
                  • SI-3235 Deprecate round on Int and Long (#3581).
                  • We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

                  Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

                  Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

                  We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).

                  Forwards and Back

                  We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

                  Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).
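
A schematic example may help (a hypothetical Lib class, sketched in comments since the two versions could not share one source file):

// Version 1.0 of a hypothetical library:
//   class Lib { def f: Int = 1 }
// Version 1.1 keeps f and adds a method:
//   class Lib { def f: Int = 1; def g: Int = 2 }
//
// Backwards compatible: a client compiled against 1.0 can only call f,
// so it links fine with 1.1 on the run-time classpath.
// Not forwards compatible: a client compiled against 1.1 may call g,
// which throws a NoSuchMethodError when only 1.0 is present at run time.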

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

                  Concretely

                  Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

                  Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

                  Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that this means it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)
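
In sbt terms, depending on the aggregate looks like this (a sketch, using the same "2.11.x" version placeholder as the text above):

libraryDependencies += "org.scala-lang" % "scala-library-all" % "2.11.x"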

                  New features in the 2.11 series

                  This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

                  • Collections

                    • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
• Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancements of up to 4x and 2x, respectively); see the sketch after this list.
                    • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
                    • List has improved performance on map, flatMap, and collect.
                    • See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
                  • Modularization

                    • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above.
                    • The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
                  • Reflection, macros and quasiquotes

                    • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
                    • See also this summary of the experimental side of the 2.11 development cycle.
                    • #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
                  • Back-end

                    • The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
                    • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
                    • Branch elimination through constant analysis #2214
                  • Compiler Performance

                    • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2-M2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
                    • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
                    • Improve performance of reflection SI-6638
• IDE

  • Numerous bug fixes and improvements!

                  • REPL

• Warnings

  • Warn about unused private / local terms and types, and unused imports, under -Xlint. This will even tell you when a local var could be a val (see the sketch after this list).

                  • Slimming down the compiler

                    • The experimental .NET backend has been removed from the compiler.
                    • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
                    • Search and destroy mission for ~5000 chunks of dead code. #1648
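
Two quick sketches of items from the list above. First, the new mutable maps specialized for Long and AnyRef keys (the example values are hypothetical):

import scala.collection.mutable.{AnyRefMap, LongMap}

val byId = LongMap(1L -> "one", 2L -> "two") // specialized for Long keys
byId(3L) = "three"

val counts = AnyRefMap("a" -> 1, "b" -> 2)   // specialized for AnyRef keys
counts.getOrElseUpdate("c", 3)

Second, the kind of code the new lint warnings flag (a hypothetical class; the exact warning wording may differ):

// compile with: scalac -Xlint Example.scala
class Example {
  import scala.util.Random // unused import: flagged under -Xlint
  private val unused = 42  // unused private term: flagged under -Xlint
  def twice(n: Int): Int = {
    var result = n * 2     // never reassigned: -Xlint suggests a val
    result
  }
}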

                  License clarification

Scala is now distributed under the standard 3-clause BSD license. Originally the same 3-clause BSD license was adopted, but it was slightly reworded over the years, and thus the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

                  March 05, 2014 11:00 PM

                  Eric Torreborre

                  Streaming with previous and next


The Scalaz streams library is very attractive, but it might feel unfamiliar because it is not your standard collection library.

This short post shows how to produce a stream of elements from another stream so that we get a triplet containing the previous element, the current element, and the next element.

                  With Scala collections

With regular Scala collections, this is not too hard. We first create a list of all the previous elements. We create them as options because there is no previous element for the first element of the list. Then we create a list of next elements (also a list of options) and we zip everything with the input list:

def withPreviousAndNext[T] = (list: List[T]) => {
  val previousElements = None +: list.map(Some(_)).dropRight(1)
  val nextElements = list.drop(1).map(Some(_)) :+ None

  // zip the three lists, then flatten the nested pairs into a triplet
  (previousElements zip list zip nextElements) map { case ((a, b), c) => (a, b, c) }
}

                  withPreviousAndNext(List(1, 2, 3))

                  > List((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

                  And streams

                  The code above can be translated pretty straightforwardly to scalaz processes:

import scalaz.stream._
import scalaz.stream.Process._

def withPreviousAndNext[F[_], T] = (p: Process[F, T]) => {
  // prepend None as the first "previous", append None as the last "next"
  val previousElements = emit(None) fby p.map(Some(_))
  val nextElements = p.drop(1).map(Some(_)) fby emit(None)

  (previousElements zip p zip nextElements).map { case ((a, b), c) => (a, b, c) }
}

                  val p1 = emitAll((1 to 3).toSeq).toSource
                  withPreviousAndNext(p1).runLog.run

                  > Vector((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

However, what we generally want with streams are combinators that we can pipe onto a given Process. We want to write:

                  def withPreviousAndNext[T]: Process1[T, T] = ???

                  val p1 = emitAll((1 to 3).toSeq).toSource
                  // produces the stream of (previous, current, next)
                  p1 |> withPreviousAndNext

                  How can we write this?

                  As a combinator

The trick is to use recursion to keep state; this is actually how many of the process1 combinators in the library are written. Let's see how this works on a simpler example. What happens if we just want a stream where elements are zipped with their previous value? Here is what we can write:

def withPrevious[T]: Process1[T, (Option[T], T)] = {

  def go(previous: Option[T]): Process1[T, (Option[T], T)] =
    await1[T].flatMap { current =>
      emit((previous, current)) fby go(Some(current))
    }

  go(None)
}

                  val p1 = emitAll((1 to 3).toSeq).toSource
                  (p1 |> withPrevious).runLog.run

                  > Vector((None,1), (Some(1),2), (Some(2),3))

Inside the withPrevious method we recursively call go with the state we need to track. In this case we want to keep track of the previous element (the first call is with None because there is no previous element for the first element of the stream). Then go awaits a new element. Each time a new element arrives, we emit it with the previous element and recursively call go, which again waits for the next element, with the element we just saw now playing the role of the previous one.

                  We can do something similar, but a bit more complex for withNext:

def withNext[T]: Process1[T, (T, Option[T])] = {
  def go(current: Option[T]): Process1[T, (T, Option[T])] =
    await1[T].flatMap { next =>
      current match {
        // accumulate the first element
        case None => go(Some(next))
        // if we have a current element, emit it with the next,
        // but when there is no more next, emit the last element with None
        case Some(c) => (emit((c, Some(next))) fby go(Some(next))).orElse(emit((next, None)))
      }
    }

  go(None)
}

                  val p1 = emitAll((1 to 3).toSeq).toSource
                  (p1 |> withNext).runLog.run

> Vector((1,Some(2)), (2,Some(3)), (3,None))

Here, we start by accumulating the first element of the stream, and then, when we get the next one, we emit both of them. And we make a recursive call remembering what is now the current element. But the process we return in flatMap has an orElse clause. It says: "by the way, if you don't have any more elements (no more next), just emit the last element with None".

                  Now with both withPrevious and withNext we can create a withPreviousAndNext process:

def withPreviousAndNext[T]: Process1[T, (Option[T], T, Option[T])] = {
  def go(previous: Option[T], current: Option[T]): Process1[T, (Option[T], T, Option[T])] =
    await1[T].flatMap { next =>
      current.map { c =>
        emit((previous, c, Some(next))) fby go(Some(c), Some(next))
      }.getOrElse(
        go(previous, Some(next))
      ).orElse(emit((current, next, None)))
    }

  go(None, None)
}

                  val p1 = emitAll((1 to 3).toSeq).toSource
                  (p1 |> withPreviousAndNext).runLog.run

                  > Vector((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

                  The code is pretty similar but this time we keep track of both the "previous" element and the "current" one.

                  emit(last paragraph)

I hope this will help beginners like me get started with scalaz-stream, and I'd be happy if scalaz-stream experts out there left comments if there's anything that can be improved (for instance: is there an effective way to combine withPrevious and withNext to get withPreviousAndNext?).

I finally need to add that, in order to get proper performance and side-effect control for the withNext and withPreviousAndNext processes, you need to use the lazy branch of scalaz-stream. It contains a fix for orElse which prevents it from being evaluated more than necessary.


                  by Eric (noreply@blogger.com) at March 05, 2014 10:06 PM

                  Functional Jobs

                  Developer at Northwestern University (Full-time)

                  The NetLogo team at Northwestern University (near Chicago) is hiring a full-time developer.

                  This might interest you if you want to:

                  • work with researchers at a university
                  • make things for kids, teachers, and scientists
                  • write Scala and CoffeeScript
                  • hack on compilers and interpreters
                  • do functional programming
                  • use the Play framework
                  • write open source software
                  • do your work on GitHub (https://github.com/NetLogo)

The CCL is looking for a full-time developer to work on NetLogo, focusing on designing web-based modeling applications in JavaScript, and on programming (Scala and/or Java) for the NetLogo desktop application, including GUI work.

                  This Software Developer position is based at Northwestern University’s Center for Connected Learning and Computer-Based Modeling (CCL), working in a small collaborative development team in a university research group that also includes professors, postdocs, graduate students, and undergraduates, supporting the needs of multiple research projects. A major focus would be on development of NetLogo, an open-source modeling environment for both education and scientific research. CCL grants also involve development work on HubNet and other associated tools for NetLogo, including research and educational NSF grants involving building NetLogo-based science curricula for schools.

                  NetLogo is a programming language and agent-based modeling environment. The NetLogo language is a dialect of Logo/Lisp specialized for building agent-based simulations of natural and social phenomena. NetLogo has many thousands of users ranging from grade school students to advanced researchers. A collaborative extension of NetLogo, called HubNet, enables groups of participants to run participatory simulation activities in classrooms and distributed participatory simulations in social science research.

                  Specific Responsibilities:

                  • Collaborates with the NetLogo development team in designing features for NetLogo, HubNet and web-based versions of these applications;
                  • Writes code independently, and in the context of a team of experienced software engineers and principal investigator;
                  • Creates, updates and documents existing models using NetLogo, HubNet and web-based applications;
                  • Creates new such models;
                  • Supports development of new devices to interact with HubNet;
                  • Interacts with commercial and academic partners to help determine design and functional requirements for NetLogo and HubNet;
                  • Interacts with user community including responding to bug reports, questions, and suggestions, and interacting with open-source contributors;
                  • Performs data collection, organization, and summarization for projects;
                  • Assists with coordination of team activities;
                  • Performs related duties as required or assigned.

                  Minimum Qualifications:

                  • A bachelor's degree in computer science or a closely related field or the equivalent combination of education, training and experience from which comparable skills and abilities may be acquired;
                  • Enthusiasm for writing clean, modular, well-tested code.

                  Desirable Qualifications:

                  • Experience with working effectively as part of a small software development team, including close collaboration, distributed version control, and automated testing;
                  • Experience with building web-based applications, both server-side and client-side components, particularly with html5 and JavaScript and/or CoffeeScript;
                  • Experience with at least one JVM language such as Java;
                  • Experience with Scala programming, or enthusiasm for learning it;
                  • Experience designing and working with GUIs, including the Swing toolkit;
                  • Experience with Haskell, Lisp, or other functional languages;
                  • Interest in and experience with programming language implementation, functional programming, and metaprogramming;
• Experience with GUI design; language design and compilers; interest in and experience with computer-based modeling and simulation, especially agent-based simulation;
                  • Interest in and experience with distributed, multiplayer, networked systems like HubNet;
                  • Experience working on research projects in an academic environment;
                  • Experience with open-source software development and supporting the growth of an open-source community; experience with Unix system administration;
                  • Interest in education and an understanding of secondary school math and science content.

                  The Northwestern campus is in Evanston, Illinois on the Lake Michigan shore, adjacent to Chicago and easily reachable by public transportation.

                  Get information on how to apply for this position.

                  March 05, 2014 07:12 PM

                  Quoi qu'il en soit

                  February 24, 2014

                  scala-lang.org

                  Scala 2.10.4-RC3 is now available!

We are very happy to announce the third release candidate of Scala 2.10.4! If no serious blocking issues are found, this will become the final 2.10.4 version.

                  The release is available for download from scala-lang.org or from Maven Central.

                  The Scala team and contributors fixed 31 issues since 2.10.3!

                  In total, 39 RC1 pull requests, 12 RC2 pull requests and 3 RC3 pull requests were merged on GitHub.

                  Known Issues

                  Before reporting a bug, please have a look at these known issues.

                  Scala IDE for Eclipse

                  The Scala IDE with this release built right in is available through the following update-site:

                  Have a look at the getting started guide for more info.

                  New features in the 2.10 series

                  Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

                  Experimental features

These experimental APIs are subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

                  A big thank you to all the contributors!

Commits  Author
26       Jason Zaugg
15       Adriaan Moors
5        Eugene Burmako
3        A. P. Marki
3        Simon Schaefer
3        Mirco Dotta
3        Luc Bourlier
2        Paul Phillips
2        François Garillot
1        Mark Harrah
1        Vlad Ureche
1        James Ward
1        Heather Miller
1        Roberto Tyley

                  Commits and the issues they fixed since v2.10.3

Issue(s)            Commit    Message
SI-7902             5f4011e   [backport] SI-7902 Fix spurious kind error due to an unitialized symbol
SI-8205             8ee165c   SI-8205 [nomaster] backport test pos.lineContent
SI-8126, SI-6566    806b6e4   Backports library changes related to SI-6566 from a419799
SI-8146             ff13742   [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types
SI-6443, SI-8143    1baf11a   SI-8143 Fix bug with super-accessors / dependent types
SI-8152             9df2dcc   [nomaster] SI-8152 Backport variance validator performance fix
SI-8111             c91d373   SI-8111 Expand the comment with a more detailed TODO
SI-8111             2c770ae   SI-8111 Repair symbol owners after abandoned named-/default-args
SI-7120, SI-8114    5876e8c   [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120
SI-7636, SI-6563    255c51b   SI-6563 Test case for already-fixed crasher
SI-8104             c0cb1d8   [nomaster] codifies the state of the art wrt SI-8104
SI-8085             7e85b59   SI-8085 Fix BrowserTraverser for package objects
SI-8085             a12dd9c   Test demonstrating SI-8085
SI-6426             47562e7   Revert "SI-6426, importable _."
SI-8062             f0d913b   SI-8062 Fix inliner cycle with recursion, separate compilation
SI-7912             006e2f2   SI-7912 Be defensive calling `toString` in `MatchError#getMessage`
SI-8060             bb427a3   SI-8060 Avoid infinite loop with higher kinded type alias
SI-7995             5ed834e   SI-7995 completion imported vars and vals
SI-8019             c955cf4   SI-8019 Make Publisher check PartialFunction is defined for Event
SI-8029             fdcc262   SI-8029 Avoid multi-run cyclic error with companions, package object
SI-7439             8d74fa0   [backport] SI-7439 Avoid NPE in `isMonomorphicType` with stub symbols.
SI-8010             9036f77   SI-8010 Fix regression in erasure double definition checks
SI-7982             7d41094   SI-7982 Changed contract of askLoadedType to unload units by default
SI-6913             7063439   SI-6913 Fixing semantics of Future fallbackTo to be according to docs
SI-7458             02308c9   SI-7458 Pres. compiler must not observe trees in silent mode
SI-7548             652b3b4   SI-7548 Test to demonstrate residual exploratory typing bug
SI-7548             b7509c9   SI-7548 askTypeAt returns the same type whether the source was fully or targeted
SI-8005             3629b64   SI-8005 Fixes NoPositon error for updateDynamic calls
SI-8004             696545d   SI-8004 Resolve NoPosition error for applyDynamicNamed method call
SI-7463, SI-8003    b915f44   SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls
SI-7280             053a274   [nomaster] SI-7280 Scope completion not returning members provided by imports
SI-7915             04df2e4   SI-7915 Corrected range positions created during default args expansion
SI-7776             d15ed08   [backport] SI-7776 post-erasure signature clashes are now macro-aware
SI-6546             075f6f2   SI-6546 InnerClasses attribute refers to absent class
SI-7638, SI-4012    e09a8a2   SI-4012 Mixin and specialization work well
SI-7519             50c8b39e  SI-7519: Additional test case covering sbt/sbt#914
SI-7519             ce74bb0   [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback
SI-4936, SI-6026    e350bd2   [nomaster] SI-6026 backport getResource bug fix
SI-6026             2bfe0e7   SI-6026 REPL checks for javap before tools.jar
SI-7295             25bcba5   SI-7295 Fix windows batch file with args containing parentheses
SI-7020             7b56021   Disable tests for SI-7020
SI-7783             2ccbfa5   SI-7783 Don't issue deprecation warnings for inferred TypeTrees
SI-7815             733b322   SI-7815 Dealias before deeming method type as dependent

                  Complete commit list!

sha       Title
5f4011e   [backport] SI-7902 Fix spurious kind error due to an unitialized symbol
8ee165c   SI-8205 [nomaster] backport test pos.lineContent
d167f14   [nomaster] corrects an error in reify’s documentation
806b6e4   Backports library changes related to SI-6566 from a419799
ff13742   [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types
cbb88ac   [nomaster] Update MiMa and use new wildcard filter
1baf11a   SI-8143 Fix bug with super-accessors / dependent types
9df2dcc   [nomaster] SI-8152 Backport variance validator performance fix
c91d373   SI-8111 Expand the comment with a more detailed TODO
2c770ae   SI-8111 Repair symbol owners after abandoned named-/default-args
5876e8c   [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120
bd4adf5   More clear implicitNotFound error for ExecutionContext
255c51b   SI-6563 Test case for already-fixed crasher
c0cb1d8   [nomaster] codifies the state of the art wrt SI-8104
7e85b59   SI-8085 Fix BrowserTraverser for package objects
a12dd9c   Test demonstrating SI-8085
3fa2c97   Report error on code size overflow, log method name.
2aa9da5   Partially revert f8d8f7d08d.
47562e7   Revert "SI-6426, importable _."
f0d913b   SI-8062 Fix inliner cycle with recursion, separate compilation
9cdbe28   Fixup #3248 missed a spot in pack.xml
006e2f2   SI-7912 Be defensive calling `toString` in `MatchError#getMessage`
bb427a3   SI-8060 Avoid infinite loop with higher kinded type alias
27a3860   Update README, include doc/licenses in distro
139ba9d   Add attribution for Typesafe.
e555106   Remove docs/examples; they reside at scala/scala-dist
dc6dd58   Remove unused android test and corresponding license.
f8d8f7d   Do not distribute partest and its dependencies.
5ed834e   SI-7995 completion imported vars and vals
c955cf4   SI-8019 Make Publisher check PartialFunction is defined for Event
fdcc262   SI-8029 Avoid multi-run cyclic error with companions, package object
8d74fa0   [backport] SI-7439 Avoid NPE in `isMonomorphicType` with stub symbols.
9036f77   SI-8010 Fix regression in erasure double definition checks
3faa2ee   [nomaster] better error messages for various macro definition errors
7d41094   SI-7982 Changed contract of askLoadedType to unload units by default
7063439   SI-6913 Fixing semantics of Future fallbackTo to be according to docs
02308c9   SI-7458 Pres. compiler must not observe trees in silent mode
652b3b4   SI-7548 Test to demonstrate residual exploratory typing bug
b7509c9   SI-7548 askTypeAt returns the same type whether the source was fully or targeted
0c963c9   [nomaster] teaches toolbox about -Yrangepos
3629b64   SI-8005 Fixes NoPositon error for updateDynamic calls
696545d   SI-8004 Resolve NoPosition error for applyDynamicNamed method call
b915f44   SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls
053a274   [nomaster] SI-7280 Scope completion not returning members provided by imports
eb9f0f7   [nomaster] Adds test cases for scope completion
3a8796d   [nomaster] Test infrastructure for scope completion
04df2e4   SI-7915 Corrected range positions created during default args expansion
ec89b59   Upgrade pax-url-aether to 1.6.0.
1d29c0a   [backport] Add buildcharacter.properties to .gitignore.
31ead67   IDE needs swing/actors/continuations
852a947   Allow retrieving STARR from non-standard repo for PR validation
40af1e0   Allow publishing only core (pr validation)
ba0718f   Render relevant properties to buildcharacter.properties
d15ed08   [backport] SI-7776 post-erasure signature clashes are now macro-aware
6045a05   Fix completion after application with implicit arguments
075f6f2   SI-6546 InnerClasses attribute refers to absent class
e09a8a2   SI-4012 Mixin and specialization work well
50c8b39e  SI-7519: Additional test case covering sbt/sbt#914
ce74bb0   [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback
e350bd2   [nomaster] SI-6026 backport getResource bug fix
2bfe0e7   SI-6026 REPL checks for javap before tools.jar
25bcba5   SI-7295 Fix windows batch file with args containing parentheses
7b56021   Disable tests for SI-7020
8986ee4   Disable flaky presentation compiler test.
2ccbfa5   SI-7783 Don't issue deprecation warnings for inferred TypeTrees
ee9138e   Bump version to 2.10.4 for nightlies
733b322   SI-7815 Dealias before deeming method type as dependent

                  February 24, 2014 11:00 PM

                  Functional Jobs

                  functional software developer at OpinionLab (Full-time)

OpinionLab is seeking a Software Developer with strong agile skills to join our Chicago, IL based Product Development team in the West Loop.

                  As a member of our Product Development team, you will play a critical role in the architecture, design, development, and deployment of OpinionLab’s web-based applications and services. You will be part of a high-visibility agile development team empowered to deliver high-quality, innovative, and market leading voice-of-customer (VoC) data acquisition and feedback intelligence solutions. If you thrive in a collaborative, fast-paced, get-it-done environment and want to be a part of one of Chicago’s most innovative companies, we want to speak with you!

                  Key Responsibilities include:

                  • Development of scalable data collection, storage, processing & distribution platforms & services.
                  • Architecture and design of a mission critical SaaS platform and associated APIs.
                  • Usage of and contribution to open-source technologies and framework.
                  • Collaboration with all members of the technical staff in the delivery of best-in-class technology solutions.
                  • Proficiency in Unix/Linux environments.
                  • Work with UX experts in bringing concepts to reality.
                  • Bridge the gap between design and engineering.
                  • Participate in planning, review, and retrospective meetings (à la Scrum).

                  Desired Skills & Experience:

                  • BDD/TDD, Pair Programming, Continuous Integration, and other agile craftsmanship practices
                  • Desire to learn Clojure (if you haven’t already)
                  • Experience with both functional and object-oriented design and development within an agile environment
                  • Polyglot programmer with mastery of one or more of the following languages: Lisp (Clojure, Common Lisp, Scheme), Haskell, Scala, Python, Ruby, JavaScript
                  • Experience delivering real-time, distributed systems in large scale production environments
                  • Knowledge of one or more of: AWS, Lucene/Solr/Elasticsearch, Storm, Chef
• Familiarity with Java, Clojure, Ruby and/or Python ecosystems
• Database experience, including but not limited to RDBMSs like PostgreSQL, Oracle, etc.
                  • Experience with design and development of externally facing RESTful APIs
                  • Familiarity with message-based (RabbitMQ, 0MQ or similar), asynchronous, and event-driven architectures
                  • Ability to thrive in informal and relaxed environments
                  • Fluency with DVCSs like Git/GitHub and/or Bitbucket
                  • Ruby on Rails development (version 3+)
                  • Experience with JS and CSS compiling, linting, minifying, cache busting
                  • Experience with Cross-Browser, responsive Design meeting Accessibility Standards (508, WCAG)
                  • Knowledge of one or more modern CSS frameworks (e.g., Bootstrap, Foundation, Bourbon, Neat)
                  • Knowledge of one or more modern JavaScript frameworks (Backbone, Ember, Angular, Knockout, MVC)

                  Compensation:

                  • Commensurate with experience.
                  • Generous benefits include medical, dental, life and disability insurances, paid holidays, vacation and sick days, 401K with employer match, FSA plan

                  Get information on how to apply for this position.

                  February 24, 2014 05:16 PM

                  Platform Engineer at Signal Vine LLC (Full-time)

                  About Signal Vine

Signal Vine, LLC is an exciting early-stage company with customers and revenue; we are growing quickly and looking for our next technical hire. We are building an incredible company that helps solve social issues with technology. We recognize that the key to our growth and success is hiring great people. Our ideal candidate for this position is someone who has enthusiasm for creative problem solving but maintains the dedication and focus to achieve the desired results.

                  We have built a communication platform for educational organizations to reach today's youth. Our platform combines text messaging with data analytics to deliver a highly personalized and interactive experience for students. Our platform helps educational organizations achieve their goals by allowing them to engage their students in a way that is natural and easy for both parties.

                  About the Job

Our stack is Haskell, Ruby, Git, and Ubuntu, and we are primarily looking for a Haskeller with a deep understanding of computer language development to help build the next generation of the Signal Vine platform. Your main focus will be building our custom DSL, but you will also be expected to work on all aspects of the Signal Vine tech platform, including tasks such as maintaining our Ruby on Rails application as necessary, responding to customer support requests, and preparing demo content. Further, we expect you to maintain a holistic view of how technology can be used to further the Signal Vine vision.

                  You...

                  • Have built release quality software with Haskell
                  • Are able to create custom DSLs
                  • Can do self directed work
                  • Work well with others
                  • Are intellectually honest
                  • Can express technical concepts to a non-technical audience
• Hold a Bachelor's degree in Mathematics, Computer Science, or a related field
• Are trustworthy and conscientious

                  It’d be cool if you...

                  • Have experience with Ruby, Scala, AngularJs
                  • Are interested in dev-ops and build automation
• Have used web frameworks to build one or more applications (e.g. RoR, NancyFx, Flask, Play, etc.)
                  • Can use CSS effectively
                  • Know Unix well
                  • Have public examples of projects you’ve completed
                  • Have published technically relevant articles, blog posts or books
• Maintain a social media presence that represents your technical interests and ability

                  We will...

                  • Pay a competitive salary including equity and health insurance
                  • Buy you a shiny new MacBook Pro
                  • Respect your work schedule and habits by focusing on results
                  • Offer you a chance to go on an exciting ride as the company grows

                  Get information on how to apply for this position.

                  February 24, 2014 03:03 PM

                  February 23, 2014

                  scala-lang.org

                  Google Summer of Code 2014

This year the Scala team applied again for Google Summer of Code, and we’re happy to announce that we have been approved as a mentoring organization!

                  What is Google Summer of Code

Google invites students to come up with interesting, non-trivial problems for their favourite open-source projects and work on them over the summer. Participants get support from the community, plus a mentor who makes sure they don’t get lost and that they meet their goals. Aside from the satisfaction of solving challenging problems, students get paid for their work. This is an incredible opportunity to get involved in the Scala community and get helpful support along the way.

                  How to get involved

                  First, have a look at our project ideas page. The ideas there are meant as suggestions to serve as a starting point. We expect students to explore the ideas in much more detail, preferably with their own suggestions and detailed plans on how they want to proceed. But don’t feel constrained by the provided list! We welcome any challenging project idea pertaining to Scala!

The best place to propose and discuss your proposals is our “scala-language” mailing list. This way you will quickly get responses from the whole Scala community. If you know of a potential mentor, it might be a good idea to include them in your discussion on the scala-language mailing list. If not, don’t be afraid to ask who you might be able to contact.

                  Previous Summer of Code

We encourage you to have a look at our Summer of Code 2010, 2011, 2012 and 2013 pages to get an idea of the kinds of projects we undertook in previous years.

                  Please join us!

                  February 23, 2014 11:00 PM

                  Daniel Sobral

                  Two Thirds

                  This is not my usual programming-related blog post. I decided to blog about books I have been reading.

I'm a long-time fan of the Honor Harrington series, a military science fiction series that draws on the spirit of 17th- to 19th-century naval series such as Horatio Hornblower or Aubrey-Maturin (from which sprang the movie Master and Commander: The Far Side of the World). These days, however, there are enough secondary stories in that universe that stories advancing the main plot are rather hard to come by. On the other hand, one could say that the original story has finally concluded, and what's going on now is a new story.

For a bit, I tried to turn to a follow-up to what is possibly my favorite fantasy trilogy, The Deed of Paksenarrion. Elizabeth Moon returned to the series with Oath of Fealty, followed by other books, but they pale in comparison with the original, which was a quite believable, and somewhat moving, story of the daughter of a sheep farmer in the back of beyond who becomes a paladin.

                  So, in despair, I tried searching for other stuff. First I came upon The Kingkiller Chronicles, feeling somewhat like the last one to know of it (and if you didn't know of it you should immediately get The Name of the Wind and The Wise Man's Fear). So, that's three books of which only two are written. This is fantasy, but, honestly, that's beside the point -- it is the prose and the attention to detail that make these books great reading.

                  Back to waiting, I looked around and found The Golden Threads Trilogy, a mix of fantasy and science fiction (though the latter mainly from the second book) story that's quite clever. I particularly love how everyone in the first book, Thread Slivers, has a different conception of what's going on and what other people want. It's highly amusing. The second book, Thread Strands, sadly decreases the fog of war factor, and leads to... well, I'll have to wait for the third book to get published to find out. Again.

                  As I waited, I noticed that the March Upcountry series by John Ringo was getting combo-treatment, with March Upcountry and March to the Sea being bundled in Empire of Man. It seems March to the Stars and We Few, the fourth and final book, will be out in a combo soon as well. Anyway, this is military science fiction pitting commandos against dinosaurs and spear-wielding aliens. What's not to like? :)

                  Now, after I re-read these books, I decided to search for other stuff by John Ringo, and came upon Black Tide Rising, a zombie series. This is one of the "realistic zombies" kind of series, where people aren't really zombies, just infected with a rabies-like virus. It tries to be realistic in the portrayal of how people survive and fight back as well, though its world is rather lighter than I feel is realistic. I don't mind though: I prefer more cheerful worlds, even in a zombie apocalypse, than what I think is realistic. :)

                  Anyway, I read Under a Graveyard Sky in a single day, then followed up with To Sail a Darkling Sea a little slower... but only because, damn!, that's it for now. And it's not even going to be a trilogy! As a bonus, the first book comes with a "zombie clearance playlist" -- nice! :)

                  by Daniel Sobral (noreply@blogger.com) at February 23, 2014 07:09 PM

                  Adrian King

                  A Completely Objective Observation

                  When you talk to Scala programmers and use the word “object”, they hear “composable parameterized namespace that is a first-class value”.

                  When you use the word “object” in front of Haskell programmers, they hear “Voldemort”.

                  by Archontophoenix (noreply@blogger.com) at February 23, 2014 02:10 AM

                  February 14, 2014

                  Paul Chiusano

                  The reactive manifesto is 'not even wrong'

                  I am sure the authors and signers of the reactive manifesto are well-meaning people, but the entire tone and concept of the site is just wrong.

                  On a technical level, though, the reactive manifesto is too vague to critique. I could try to interpret and respond to some of the vague claims that seem wrong or silly (for instance, I detect some confusion between an asynchronous API and an asynchronous implementation), but then I fully expect defenders to define away the criticism or say I've misinterpreted things. Its arguments hold up only by being not even wrong.

                  When you cut through the bullshit, it seems the only actual information content is a set of inane or tautological statements like "being resilient to failure is good" and "one shouldn't waste OS threads by blocking". Do we really need a manifesto to tell us these things? Of course not.

                  But the point of the reactive manifesto is not to make precise technical claims. Technical arguments don't need manifestos or rallying cries. Imagine the ridiculousness of creating quicksortmanifesto.org to rally people around O(n*log n) sorting algorithms as opposed to O(n^2) algorithms.

                  No, the reactive manifesto is a piece of pop culture, which I mean in the sense used by Alan Kay:

                  Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

                  In the reactive manifesto, one is invited to join a movement and rally around a banner of buzzwords and a participatory, communal cloud of vagueness. Well, I don't want to join such a movement, and the pop culture and tribalism of our industry is something I'd like to see go away.

                  I would welcome some interesting precise claims and arguments (that aren't inane truisms) about how to build robust large systems (there may even be the seeds of some nuggets of truth somewhere in the reactive manifesto). But let's not make it a manifesto, please!

                  Update: I recently received a note from a recruiter, which contained the following gem:

                  We checked out your projects on GitHub and we are really impressed with your Scala skills. Interested in solving hard problems? Does designing and building massively scaling, event-driven systems get you excited? Do you believe in the reactive manifesto? Let’s talk.


                  by Paul Chiusano (noreply@blogger.com) at February 14, 2014 08:27 PM

                  February 12, 2014

                  Kris Nuttycombe

                  The Abstract Future

                  This post is being resurrected from the dustbin of history - it was originally posted on the precog.com engineering blog, which has since been lost to acquisition and bitrot. My opinion on the applicability of the following techniques has changed somewhat since I originally wrote this post; I will address this in a followup post in the near future. Briefly, though, I believe that parameterizing each method with an implicit monad constraint is preferable where possible; it provides the user with greater flexibility.

                  In our last blog post on Precog development, Daniel wrote about how we use the Cake Pattern to structure our codebase and to leave the implementation types abstract as long as possible. As he showed in that post, this is an extremely powerful concept; by keeping a type existential, values of that type remain opaque to any modules that aren’t “aware” of the eventual type chosen, and so are prevented by the compiler from breaking encapsulation boundaries.

                  In today’s post, we’re going to extend this notion beyond types to handle type constructors, and in so doing will show a mechanism that allows us to switch out entire models of computation.

                  If you’ve been working with Scala for any length of time, you’ve undoubtedly heard the word “monad” floating around in one context or another, perhaps in a discussion about the syntactic sugar provided by Scala’s ‘for’ keyword or in a blog post discussing how the Option type can be used to avoid the pitfalls of null references. While a significant amount of discussion of monads in Scala focuses on the “container” types, a few types common in the Scala ecosystem display a more interesting facet of monadic composition – delimited computation. While all monadic types exhibit this in composition, perhaps the most commonly used monadic type in Scala that exemplifies this sort of use directly is akka.dispatch.Future (which is scheduled to replace Scala’s current Future interface in the standard library in Scala 2.10), which encodes asynchronous computation. It embodies the aspect of monadic composition that we’re most concerned with here: a flexible way to order the steps of a computation.

                  I’d like to step back a moment here and state that this post isn’t intended to function as a monad tutorial; numerous (perhaps too many!) articles about monads and their relevance to programming in Scala exist elsewhere, and if you’re new to the concept it will be useful to take advantage of one or more of those resources before continuing here. It is, however, important to point out up front that the use of monads in Scala, while pervasive (as evidenced by the presence of ‘for’ as syntactic sugar for monadic composition), is somewhat idiosyncratic in that the Scala standard library actually provides no Monad type. For this, we have to look outside the standard library to the excellent scalaz project. Scalaz’s encoding of the monadic abstraction relies upon the implicit typeclass pattern. The base Monad type is shown here, simplified, for reference:


                  trait Monad[M[_]] {
                    def point[A](a: => A): M[A]
                    def bind[A, B](m: M[A])(f: A => M[B]): M[B]
                    def map[A, B](m: M[A])(f: A => B): M[B] = bind(m)(a => point(f(a)))
                  }

                  You’ll note that the Monad trait is not parameterized by a specific type, but rather by a type constructor of one argument. The methods defined inside Monad are parametrically polymorphic, which means that a specific type must be supplied to fill the “hole” left by the type constructor at each invocation point. This will be important later, when we talk about how to actually take advantage of this abstraction.
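
                  To make the typeclass pattern concrete, here is what an instance for Option might look like, written against the simplified Monad trait above. This is a minimal sketch for illustration only; scalaz ships its own, richer instances:

                  // A sketch of a Monad instance for Option against the simplified
                  // trait above (purely illustrative; scalaz provides the real one).
                  implicit val optionMonad: Monad[Option] = new Monad[Option] {
                    def point[A](a: => A): Option[A] = Some(a)
                    def bind[A, B](m: Option[A])(f: A => Option[B]): Option[B] = m match {
                      case Some(a) => f(a)
                      case None    => None
                    }
                  }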

                  Scalaz provides implementations of this type for most of the monadic types in the Scala standard library, as well as several more sophisticated monadic types, which we’ll return to in a moment. For now, however, let’s talk a bit about Akka’s Futures.

                  An Akka Future represents a computation whose value is produced asynchronously, and which may fail. As I noted before, akka.dispatch.Future is monadic: it is a type for which the Monad trait above can be trivially implemented and which satisfies the monad laws, and so it provides an extremely useful primitive for composing asynchronous computations without all sorts of tedious mucking about with manual management of threads and shared mutable state. At Precog, we use Futures extensively, both directly and as a composable way to interact with subsystems that are implemented atop Akka’s actor framework. Futures are arguably one of the best tools we have for reining in the complexity of asynchronous programming, and so many of our early APIs exposed Futures directly. For example, here’s a snippet of one of our internal APIs, which follows the Cake pattern as described previously.

                  trait DatasetModule {
                    type Dataset

                    trait DatasetLike {
                      /** The members of this dataset will be used to determine what sets to
                        * load, and the resulting sets will be unioned together. */
                      def load: Future[Dataset]

                      /** Sorts the dataset by the specified value function. */
                      def sort(sortBy: /*...*/): Future[Dataset]

                      /** Retains a prefix of this dataset. */
                      def take(size: Int): Dataset

                      /** Map members of the dataset into the A type using the specified value
                        * function, then combine using the resulting monoid. */
                      def reduce[A: Monoid](mapTo: /*...*/): Future[A]
                    }
                  }

                  The Dataset type here is something of a strawman, but is loosely representative of the type that we use internally to represent an intermediate result of a computation - a lazy data structure with a number of operations that can be used to manipulate it, some of which may involve actually evaluating a function over the entire dataset and which may involve I/O, distributed evaluation, and asynchronous computation. Based on this interface, it’s easy to see that evaluation of some query with respect to a dataset might involve a load, a sort, taking a prefix, and a reduction of that prefix. Moreover, such an evaluation will not rely upon anything except the monadic nature of Future to compose its steps. What this means is that from the perspective of the consumer of the DatasetModule interface, the only aspect of Future that we’re relying upon is the ability to order operations in a statically checked fashion; the sequencing, rather than any particular semantics related to Future’s asynchrony, is the relevant piece of information provided by the type. So, the following generalization becomes natural:

                  trait DatasetModule[M[+_]] {
                    type Dataset
                    implicit def M: Monad[M]

                    trait DatasetLike {
                      /** The members of this dataset will be used to determine what sets to
                        * load, and the resulting sets will be unioned together. */
                      def load: M[Dataset]

                      /** Sorts the dataset by the specified value function. */
                      def sort(sortBy: /*...*/): M[Dataset]

                      /** Retains a prefix of this dataset. */
                      def take(size: Int): Dataset

                      /** Map members of the dataset into the A type using the specified value
                        * function, then combine using the resulting monoid. */
                      def reduce[A: Monoid](mapTo: /*...*/): M[A]
                    }
                  }
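
                  To see what consumers gain from this, here is a sketch of code written purely against the abstract M, mirroring the load/sort/take/reduce pipeline described above. mySortFun and myCountFunc are hypothetical placeholders, and we assume that a Dataset is (or is viewable as) a DatasetLike and that a Monoid[Int] instance is in scope:

                  // A sketch only: this lives inside a DatasetModule[M], so Dataset, M,
                  // and the implicit Monad[M] are all in scope. scalaz's monad syntax
                  // supplies the map/flatMap that the for-comprehension desugars to.
                  import scalaz.syntax.monad._

                  def loadSortTakeReduce(ds: Dataset): M[Int] =
                    for {
                      loaded <- ds.load                    // M[Dataset]
                      sorted <- loaded.sort(mySortFun)     // M[Dataset]; mySortFun is hypothetical
                      value  <- sorted.take(10).reduce[Int](myCountFunc) // myCountFunc is hypothetical
                    } yield value

                  Note that nothing in this function mentions asynchrony; only the Monad[M] instance is used for sequencing, so the same code works whether M turns out to be Future, the identity monad, or a full transformer stack.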

                  Down the road, of course, some concrete implementation of DatasetModule will refine the type constructor M to be Future:

                  /** The implicit ExecutionContext is necessary for the implementation of
                    * M.point. */
                  class FutureMonad(implicit executor: ExecutionContext) extends Monad[Future] {
                    override def point[A](a: => A): Future[A] = Future { a }
                    override def bind[A, B](m: Future[A])(f: A => Future[B]): Future[B] =
                      m flatMap f
                  }

                  abstract class ConcreteDatasetModule(implicit executor: ExecutionContext)
                      extends DatasetModule[Future] {
                    val M: Monad[Future] = new FutureMonad
                  }

                  In practice, we may actually leave M abstract until “the end of the universe.” In the Precog codebase, the M type will frequently represent the bottom of a stack of monad transformers including StateT, StreamT, EitherT and others that the actual implementation of the Dataset type depends upon.
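
                  As a rough illustration of what such a refinement might look like (this is a sketch, not the actual Precog stack; Err is an assumed application error type):

                  import scalaz.EitherT

                  // Hypothetical "end of the universe" refinement: error handling (EitherT)
                  // layered over asynchrony (Future). scalaz can derive a Monad[EvalM]
                  // from a Monad[Future] such as FutureMonad above (modulo variance
                  // annotations on the alias).
                  type Err = String
                  type EvalM[A] = EitherT[Future, Err, A]

                  // A production module would then be a DatasetModule[EvalM], while tests
                  // keep using the identity monad as described below.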

                  This generalization has numerous benefits. First, as with the previous examples of our use of the Cake pattern, consumers of the DatasetModule trait are completely and statically insulated from irrelevant details of the implementation type. An important such consumer is a test suite. In a test, we probably don’t want to worry about the fact that the computation is being performed asynchronously; all that we care about is that we obtain a correct result. If our M is in fact at the bottom of a transformer stack, we can trivially replace it with the identity monad and use the “copointed” nature of this monad (the ability to “extract” a value from the monadic context). This allows us to build a similarly generic test harness:

                  /** Copointed is available from scalaz as well; reproduced here for clarity. */
                  trait Copointed[M[_]] {
                    /** Extract and return the value from the enclosing context. */
                    def copoint[A](m: M[A]): A
                  }

                  trait TestDatasetModule[M[+_]] extends DatasetModule[M] {
                    implicit def M: Monad[M] with Copointed[M]

                    //... utilities for test dataset generation, stubbing load/sort, etc.
                  }

                  For most cases, we’ll use the identity monad for testing. Suppose that we’re testing the piece of functionality described earlier, which has computed a result from the combination of a load, a sort, take and reduce. The test framework need never consider the monad that it’s operating in.

                  import scalaz._
                  import scalaz.syntax.monad._
                  import scalaz.syntax.copointed._

                  class MyEvaluationSpec extends Specification {
                    // assumes an Id-parameterized concrete module implementation
                    val module = new TestDatasetModule[Id] with ConcreteDatasetModule[Id] {
                      val M = Monad[Id] // the monad for Id is copointed in scalaz
                    }

                    "evaluation" should {
                      "determine the correct result for the load/sort/take/reduce case" in {
                        val loadFrom: module.Dataset = //...
                        val expected: Int = //...

                        val result = for {
                          ds     <- loadFrom.load
                          sorted <- ds.sort(mySortFun)
                          prefix =  sorted.take(10)
                          value  <- prefix.reduce[Int](myCountFunc)
                        } yield value

                        result.copoint must_== expected
                      }
                    }
                  }

                  In the case that we have a portion of the implementation that actually depends upon the specific monadic type (say, for example, that our sort implementation relies on Akka actors and the “ask” pattern under the hood, so that we’re using Futures) we can simply encode this in our test in a straightforward fashion:

                  abstract class TestFutureDatasetModule(implicit executor: ExecutionContext)
                      extends TestDatasetModule[Future] {
                    def testTimeout: akka.util.Duration

                    object M extends FutureMonad(executor) with Copointed[Future] {
                      def copoint[A](m: Future[A]): A = Await.result(m, testTimeout)
                    }
                  }

                  Future is, of course, not properly copointed (since Await can throw an exception) but for the purposes of testing (and testing exclusively) this construction is ideal. As before, we get exactly the type that we need, statically determined, at exactly the place that we need it.

                  In practice, we’ve found that abstracting away the particular monad our code is concerned with has helped tremendously in keeping the concerns of different parts of our codebase well isolated, and in ensuring that we simply cannot sidestep the sequencing requirements that are necessary to make a large, functional codebase work together as a coherent whole. As an added benefit, many parts of our application that were not initially designed with parallel execution in mind are able to execute concurrently. For example, in many cases we’ll compute a List[M[...]] and then use the “sequence” function provided by scalaz.Traverse to turn it into an M[List[...]]; when M is Future, each element may be computed in parallel, with the final sequenced result becoming available only when all the computations producing the members of the list are complete. And, ultimately, even this merely scratches the surface of the compositionality that this abstraction makes possible.
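
                  A sketch of that sequencing pattern, under the assumption that scalaz’s Traverse instance for List and a Monad[Future] (e.g. FutureMonad above) are available; gather is a hypothetical name:

                  import scalaz.Monad
                  import scalaz.std.list._        // Traverse[List]
                  import scalaz.syntax.traverse._ // enables .sequence

                  // Turn a List[Future[A]] into a Future[List[A]]. The element futures
                  // may already be running in parallel; the sequenced future completes
                  // only once all of them have. Monad[Future] supplies the Applicative
                  // that sequence requires.
                  def gather[A](parts: List[Future[A]])(implicit M: Monad[Future]): Future[List[A]] =
                    parts.sequence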

                  by Kris Nuttycombe (noreply@blogger.com) at February 12, 2014 05:27 PM

                  February 03, 2014

                  scala-lang.org

                  Scala 2.10.4-RC2 is now available!

                  We are very happy to announce the second release candidate of Scala 2.10.4! If no serious blocking issues are found, this will become the final 2.10.4 version.

                  The release is available for download from scala-lang.org or from Maven Central.

                  The Scala team and contributors fixed 23 issues since 2.10.3!

                  In total, 39 RC1 pull requests and 12 RC2 pull requests were merged on GitHub.

                  Known Issues

                  Before reporting a bug, please have a look at these known issues.

                  Scala IDE for Eclipse

                  The Scala IDE with this release built right in is available through the following update-site:

                  Have a look at the getting started guide for more info.

                  New features in the 2.10 series

                  Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

                  Experimental features

                  These experimental APIs are subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

                  A big thank you to all the contributors!

                  Commits  Author
                  21       Jason Zaugg
                  15       Adriaan Moors
                  4        Eugene Burmako
                  3        Simon Schaefer
                  3        Mirco Dotta
                  3        Luc Bourlier
                  2        Som Snytt
                  2        Paul Phillips
                  1        Mark Harrah
                  1        Vlad Ureche
                  1        James Ward
                  1        Heather Miller
                  1        Roberto Tyley
                  1        François Garillot

                  Commits and the issues they fixed since v2.10.3

                  Issue(s)                   Commit    Message
                  SI-8111                    c91d373   SI-8111 Expand the comment with a more detailed TODO
                  SI-8111                    2c770ae   SI-8111 Repair symbol owners after abandoned named-/default-args
                  SI-7120, SI-8114, SI-7120  5876e8c   [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120
                  SI-7636, SI-6563           255c51b   SI-6563 Test case for already-fixed crasher
                  SI-8104, SI-8104           c0cb1d8   [nomaster] codifies the state of the art wrt SI-8104
                  SI-8085                    7e85b59   SI-8085 Fix BrowserTraverser for package objects
                  SI-8085                    a12dd9c   Test demonstrating SI-8085
                  SI-6426                    47562e7   Revert "SI-6426, importable _."
                  SI-8062                    f0d913b   SI-8062 Fix inliner cycle with recursion, separate compilation
                  SI-7912                    006e2f2   SI-7912 Be defensive calling `toString` in `MatchError#getMessage`
                  SI-8060                    bb427a3   SI-8060 Avoid infinite loop with higher kinded type alias
                  SI-7995                    5ed834e   SI-7995 completion imported vars and vals
                  SI-8019                    c955cf4   SI-8019 Make Publisher check PartialFunction is defined for Event
                  SI-8029                    fdcc262   SI-8029 Avoid multi-run cyclic error with companions, package object
                  SI-7439                    8d74fa0   [backport] SI-7439 Avoid NPE in `isMonomorphicType` with stub symbols.
                  SI-8010                    9036f77   SI-8010 Fix regression in erasure double definition checks
                  SI-7982                    7d41094   SI-7982 Changed contract of askLoadedType to unload units by default
                  SI-6913                    7063439   SI-6913 Fixing semantics of Future fallbackTo to be according to docs
                  SI-7458                    02308c9   SI-7458 Pres. compiler must not observe trees in silent mode
                  SI-7548                    652b3b4   SI-7548 Test to demonstrate residual exploratory typing bug
                  SI-7548                    b7509c9   SI-7548 askTypeAt returns the same type whether the source was fully or targeted
                  SI-8005                    3629b64   SI-8005 Fixes NoPositon error for updateDynamic calls
                  SI-8004                    696545d   SI-8004 Resolve NoPosition error for applyDynamicNamed method call
                  SI-7463, SI-8003           b915f44   SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls
                  SI-7280                    053a274   [nomaster] SI-7280 Scope completion not returning members provided by imports
                  SI-7915                    04df2e4   SI-7915 Corrected range positions created during default args expansion
                  SI-7776                    d15ed08   [backport] SI-7776 post-erasure signature clashes are now macro-aware
                  SI-6546                    075f6f2   SI-6546 InnerClasses attribute refers to absent class
                  SI-7638, SI-4012           e09a8a2   SI-4012 Mixin and specialization work well
                  SI-7519                    50c8b39e  SI-7519: Additional test case covering sbt/sbt#914
                  SI-7519                    ce74bb0   [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback
                  SI-4936, SI-6026           e350bd2   [nomaster] SI-6026 backport getResource bug fix
                  SI-6026                    2bfe0e7   SI-6026 REPL checks for javap before tools.jar
                  SI-7295                    25bcba5   SI-7295 Fix windows batch file with args containing parentheses
                  SI-7020                    7b56021   Disable tests for SI-7020
                  SI-7783                    2ccbfa5   SI-7783 Don't issue deprecation warnings for inferred TypeTrees
                  SI-7815                    733b322   SI-7815 Dealias before deeming method type as dependent

                  Complete commit list!

                  sha       Title
                  c91d373   SI-8111 Expand the comment with a more detailed TODO
                  2c770ae   SI-8111 Repair symbol owners after abandoned named-/default-args
                  5876e8c   [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120
                  bd4adf5   More clear implicitNotFound error for ExecutionContext
                  255c51b   SI-6563 Test case for already-fixed crasher
                  c0cb1d8   [nomaster] codifies the state of the art wrt SI-8104
                  7e85b59   SI-8085 Fix BrowserTraverser for package objects
                  a12dd9c   Test demonstrating SI-8085
                  3fa2c97   Report error on code size overflow, log method name.
                  2aa9da5   Partially revert f8d8f7d08d.
                  47562e7   Revert "SI-6426, importable _."
                  f0d913b   SI-8062 Fix inliner cycle with recursion, separate compilation
                  9cdbe28   Fixup #3248 missed a spot in pack.xml
                  006e2f2   SI-7912 Be defensive calling `toString` in `MatchError#getMessage`
                  bb427a3   SI-8060 Avoid infinite loop with higher kinded type alias
                  27a3860   Update README, include doc/licenses in distro
                  139ba9d   Add attribution for Typesafe.
                  e555106   Remove docs/examples; they reside at scala/scala-dist
                  dc6dd58   Remove unused android test and corresponding license.
                  f8d8f7d   Do not distribute partest and its dependencies.
                  5ed834e   SI-7995 completion imported vars and vals
                  c955cf4   SI-8019 Make Publisher check PartialFunction is defined for Event
                  fdcc262   SI-8029 Avoid multi-run cyclic error with companions, package object
                  8d74fa0   [backport] SI-7439 Avoid NPE in `isMonomorphicType` with stub symbols.
                  9036f77   SI-8010 Fix regression in erasure double definition checks
                  3faa2ee   [nomaster] better error messages for various macro definition errors
                  7d41094   SI-7982 Changed contract of askLoadedType to unload units by default
                  7063439   SI-6913 Fixing semantics of Future fallbackTo to be according to docs
                  02308c9   SI-7458 Pres. compiler must not observe trees in silent mode
                  652b3b4   SI-7548 Test to demonstrate residual exploratory typing bug
                  b7509c9   SI-7548 askTypeAt returns the same type whether the source was fully or targeted
                  0c963c9   [nomaster] teaches toolbox about -Yrangepos
                  3629b64   SI-8005 Fixes NoPositon error for updateDynamic calls
                  696545d   SI-8004 Resolve NoPosition error for applyDynamicNamed method call
                  b915f44   SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls
                  053a274   [nomaster] SI-7280 Scope completion not returning members provided by imports
                  eb9f0f7   [nomaster] Adds test cases for scope completion
                  3a8796d   [nomaster] Test infrastructure for scope completion
                  04df2e4   SI-7915 Corrected range positions created during default args expansion
                  ec89b59   Upgrade pax-url-aether to 1.6.0.
                  1d29c0a   [backport] Add buildcharacter.properties to .gitignore.
                  31ead67   IDE needs swing/actors/continuations
                  852a947   Allow retrieving STARR from non-standard repo for PR validation
                  40af1e0   Allow publishing only core (pr validation)
                  ba0718f   Render relevant properties to buildcharacter.properties
                  d15ed08   [backport] SI-7776 post-erasure signature clashes are now macro-aware
                  6045a05   Fix completion after application with implicit arguments
                  075f6f2   SI-6546 InnerClasses attribute refers to absent class
                  e09a8a2   SI-4012 Mixin and specialization work well
                  50c8b39e  SI-7519: Additional test case covering sbt/sbt#914
                  ce74bb0   [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback
                  e350bd2   [nomaster] SI-6026 backport getResource bug fix
                  2bfe0e7   SI-6026 REPL checks for javap before tools.jar
                  25bcba5   SI-7295 Fix windows batch file with args containing parentheses
                  7b56021   Disable tests for SI-7020
                  8986ee4   Disable flaky presentation compiler test.
                  2ccbfa5   SI-7783 Don't issue deprecation warnings for inferred TypeTrees
                  ee9138e   Bump version to 2.10.4 for nightlies
                  733b322   SI-7815 Dealias before deeming method type as dependent

                  February 03, 2014 11:00 PM