A couple of years ago we presented some design sketches for a code coverage tool for Clojure. More recently we spent some time researching whether existing code coverage tools would meet our requirements and, after finding that Java-based code coverage tools either don’t work at all or produce unhelpful output, we decided to finally write cloverage. You can find it on GitHub: https://github.com/lshift/cloverage.
To try it out, add the lein-cloverage plugin to your user profile in ~/.lein/profiles.clj, then run lein cloverage in your project root.
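The plugin entry is a one-liner; here is a minimal sketch of what ~/.lein/profiles.clj might look like (the version string is a placeholder, not a specific release; check Clojars for the current lein-cloverage version):

```clojure
;; ~/.lein/profiles.clj : the :user profile applies to every project
{:user {:plugins [[lein-cloverage "RELEASE"]]}}
```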
It’s based on a prototype one of our commenters mentioned on Tim’s post. Thanks Mike!
When I wrote my Smalltalk parsing-with-derivatives library, I ran into an issue with compaction: cycles in the parser graph. Self-referencing parsers (corresponding to left- and right-recursive rules) occur naturally, so I couldn’t hide from the problem. I investigated two ways of introducing circularity, as well as how to compact the resulting graphs: delegates and “sutures”.
Seriously exciting stuff is happening to the way stories are being told on the web beyond a standard blog roll.
There is already some really good work on the web showcasing the ways a story can be brought alive through visual effects and interactive elements by combining bits of HTML5, JS and CSS3 — you should check out these parallax-based web comics: Never Mind the Bullets and Jess & Russ. I’d say both look like the fruit of many hours of hard work, and each is rightfully a piece of art in itself. There will only be more of these as time goes on.
Enums are a way of encoding a set of ordinal values in a type system. That is, they formalise the notion that a value may be one of a small set of specific values. We’ve had them since at least the 1970s. They’re really useful. So why might they not always be the right tool?
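To make the idea concrete, here is a minimal Python sketch (the Suit type is an invented illustration, not from the original post):

```python
from enum import Enum

class Suit(Enum):
    """A value of type Suit may only be one of these four members."""
    CLUBS = 1
    DIAMONDS = 2
    HEARTS = 3
    SPADES = 4

# The type system rejects anything outside the set:
# Suit(5) raises ValueError.
```

The members are ordinal (CLUBS < … in declaration order, with distinct underlying values), which is exactly the formalisation described above.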
When your application is based on Spring it makes a lot of sense to fire up a Spring context within your integration tests and functional tests.
For a particular Scala-based project it was necessary to manage not only the lifetime of the Spring context, but also the lifetime of an annotation-based REST library component called Jersey, which works together with Spring.
I did the Coursera Natural Language Processing course at the beginning of the year. Apart from the introduction to probability it gave me, the thing that sticks most in my mind comes from one of the exercises. In the exercise we had to define a probabilistic parser to parse (an extremely limited subset of) English. That’s not the fun bit though. The fun bit was using the information in the parser to generate “English” sentences. The idea of a grammar generating sentences is right there in the standard way of defining languages, but for some reason it hadn’t occurred to me to actually build such a thing.
The idea’s simple enough: complicated parsers (language generators) are made up of simpler parsers (language generators) until you hit trivial parsers (language generators).
Of course we don’t want just one sentence, so it makes sense to build streams (we’ll use my favourite stream implementation, Xtreams) so that we may – if desired – generate as many different sentences as we’d like.
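The same idea can be sketched in a few lines of Python (the toy grammar below is invented for illustration, and a Python generator stands in for the Xtreams stream): each non-terminal expands, recursively and at random, until only terminal words remain.

```python
import random

# A toy probabilistic grammar: each non-terminal maps to a list of
# (weight, expansion) pairs; anything not in the table is a terminal.
GRAMMAR = {
    "S":  [(1.0, ["NP", "VP"])],
    "NP": [(0.5, ["the", "N"]), (0.5, ["a", "N"])],
    "VP": [(1.0, ["V", "NP"])],
    "N":  [(0.5, ["dog"]), (0.5, ["parser"])],
    "V":  [(0.5, ["sees"]), (0.5, ["builds"])],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal: emit as-is
    weights, expansions = zip(*GRAMMAR[symbol])
    chosen = random.choices(expansions, weights=weights)[0]
    return [word for part in chosen for word in generate(part)]

def sentences():
    """An endless stream of sentences, one per draw from the grammar."""
    while True:
        yield " ".join(generate())

print(next(sentences()))
```

Because `sentences()` is lazy, we can pull as many different sentences as we like, just as with the stream-based Smalltalk version.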
I had hoped Nate Silver was going to announce explicitly that this was his final pre-election prediction, but with less than three and a half hours to go before the first polls close, I think there’s not much time to make another one. I’ve updated the battleground chart with his predictions, and I’ll update it as polls are called until I fall asleep. Let me know if you find this useful – it’s certainly the only way I can tell what it means when they call a state!
I’m currently documenting an application that uses RabbitMQ extensively. I want to show the routing topology within the broker, but I want the bulk of it generated automatically because there are a lot of entities to deal with. Given that I can export the broker definitions into JSON, it seemed like it would be fairly straightforward to generate something using Graphviz.
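A minimal Python sketch of the approach: the definitions shown are an invented fragment in the shape the management plugin exports (the real export carries many more fields, such as vhosts, arguments and exchange-to-exchange bindings), and the DOT styling choices are illustrative.

```python
import json

# Hypothetical, trimmed-down broker definitions of the kind the
# RabbitMQ management plugin exports.
definitions = json.loads("""
{
  "exchanges": [{"name": "orders", "type": "topic"}],
  "queues":    [{"name": "invoices"}],
  "bindings":  [{"source": "orders", "destination": "invoices",
                 "destination_type": "queue", "routing_key": "order.paid"}]
}
""")

def to_dot(defs):
    """Render exchanges, queues and bindings as a Graphviz digraph."""
    lines = ["digraph broker {"]
    for ex in defs["exchanges"]:
        # Exchanges as ellipses, labelled with their type.
        lines.append('  "x:%s" [shape=ellipse, label="%s\\n(%s)"];'
                     % (ex["name"], ex["name"], ex["type"]))
    for q in defs["queues"]:
        # Queues as boxes.
        lines.append('  "q:%s" [shape=box, label="%s"];'
                     % (q["name"], q["name"]))
    for b in defs["bindings"]:
        # Bindings as edges, labelled with the routing key.
        prefix = "q" if b["destination_type"] == "queue" else "x"
        lines.append('  "x:%s" -> "%s:%s" [label="%s"];'
                     % (b["source"], prefix, b["destination"],
                        b["routing_key"]))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(definitions))
```

Piping the output through `dot -Tpng` then yields a diagram of the routing topology.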