A little less than two months ago I moved the Silverlight and DLR integration (AgDLR) project to http://github.com/jschementi/agdlr, in an effort to get more collaboration. Since then Dan Eloff and Mark Ryall have sent me pull requests, fixing up various error reporting, threading, and HTML bugs. Thanks guys! I promise to pull from you soon!
Anyway, a little while ago, Mark said:
I don't want to get all Captain Agile on you so soon but what do you think I can possibly add in the way of tests to verify that the change doesn't break anything?
Yeah, not having tests kind of sucks, doesn't it? Unfortunately, testing in Silverlight is a winding, confusing, one-lane road, so getting the infrastructure to run tests has taken some time, but I just committed a Silverlight spec runner for bacon (a little Ruby spec framework) and a bunch of specs for Microsoft.Scripting.Silverlight!
Here's a short screencast walking you through running the tests:
Testing AgDLR from Jimmy Schementi on Vimeo.
So, the tests are written in Ruby, but they test the C# code in Microsoft.Scripting.Silverlight ... pretty cool. This is a deviation from the common belief of "test in the language you write the code in", but so what ... I never cared for that way of thinking =)
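Bacon specs use the familiar describe/it shape. As a rough illustration of how a tiny spec DSL like that can work — this is a made-up sketch in plain Ruby, not bacon's actual implementation — each `it` block runs, assertions raise on failure, and the runner records pass/fail instead of aborting:

```ruby
# Minimal sketch of a bacon-style spec runner (illustrative only --
# MiniSpec is not bacon; bacon's real syntax uses `should` matchers).
class MiniSpec
  Failure = Class.new(StandardError)

  def initialize(name)
    @name, @results = name, []
  end

  # Run one spec block and record pass/fail instead of aborting the run.
  def it(description)
    yield
    @results << [description, :pass]
  rescue Failure
    @results << [description, :fail]
  end

  def assert(condition, message = "expected truthy value")
    raise Failure, message unless condition
  end

  attr_reader :results
end

spec = MiniSpec.new("Integer")
spec.it("adds")      { spec.assert(1 + 1 == 2) }
spec.it("subtracts") { spec.assert(2 - 1 == 0) }
spec.results  # => [["adds", :pass], ["subtracts", :fail]]
```

The payoff of this style is that a failing spec is just recorded data, so one bad test can't take down the whole in-browser run.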
This test runner can be copied and used to test any Silverlight code; just place your test file in the ruby/test directory and update the test list in ruby/app.rb, and that's all there is to it. I'll probably pull it out into its own git repository, but for now it's part of AgDLR.
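As a sketch of what that looks like — the file name, `MyClass`, and the exact shape of the test list in ruby/app.rb are assumptions for illustration; the describe/it/should syntax is bacon's:

```ruby
# ruby/test/my_class_test.rb -- a hypothetical bacon spec
describe 'MyClass' do
  it 'does something useful' do
    MyClass.new.value.should == 42
  end
end

# Then register the new file in ruby/app.rb's test list (the variable
# name below is a guess -- check app.rb for the actual list):
#
#   tests = ['my_class_test']
```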
What took so long?
Yeah yeah, two months without tests is crazy, especially since these tests were just written in the last two weeks! This being Microsoft and all, with an entire discipline devoted to testing, you'd think we'd have a ton of automated tests to verify our Silverlight integration. Well, we do. Kind of. Let me explain.
Currently AgDLR's tests are mostly in-browser tests, with some "sanity" tests running on the command line. The command-line runner (which I can't release to the public) hosts CoreCLR, but doesn't work exactly like the managed environment in Silverlight, which is why most tests are in-browser. The in-browser tests are made up of a lot of "does this feature of Silverlight work in the DLR" tests, as well as the more useful IronPython and IronRuby test suites, and some end-to-end tests of sample applications (DLRConsole, Clock, etc). They run on an internal version of Microsoft.Silverlight.Testing, which, again, stops me from just releasing them.
Today these tests mainly run in a check-in system (the infamous SNAP, aka "the troll"), which runs tests in parallel over a bunch of machines. In SNAP, the Silverlight tests can take 30+ minutes themselves. On my laptop it takes about 5 hours! Basically useless for anyone wanting to contribute to AgDLR and run tests.
Yeah, that's how it makes me feel too. Why they are like this is an entirely different, and possibly inappropriate, post, but I'll summarize. The issue is twofold:
(1) The browser is killed after each test
Yeah, you heard me right. And each test has to run in both Firefox and IE. The tests are launched from a custom test harness, which launches the appropriate browser instance and waits for the browser to say whether the test passed or failed, collecting any extra information like stack traces and error messages. Sometimes this completion detection happens quickly, but other times a failure never happens, so the test must time out (60 seconds), and then it re-runs itself three times to make sure the timeout wasn't some fluke. Definitely no failing-fast happening here. Much of this is fixed in a newer version of Microsoft.Silverlight.Testing, but this is what's running today.
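To make the cost concrete, here's a Ruby sketch of the timeout-and-retry behavior just described — the real harness is internal, so the names and structure here are assumptions; only the 60-second timeout and the three re-runs come from the description above:

```ruby
require 'timeout'

# Sketch of the harness's retry-on-timeout loop (hypothetical names).
# A test that never reports back is killed after the timeout and
# re-run up to `retries` more times before being marked as a timeout.
TIMEOUT_SECONDS = 60
MAX_RETRIES = 3

def run_with_retries(timeout: TIMEOUT_SECONDS, retries: MAX_RETRIES)
  attempts = 0
  begin
    attempts += 1
    Timeout.timeout(timeout) { yield }  # launch browser, wait for a verdict
  rescue Timeout::Error
    retry if attempts <= retries
    :timeout
  end
end

# A hung test runs 1 + retries = 4 times before the harness gives up:
count = 0
result = run_with_retries(timeout: 0.01, retries: 3) { count += 1; sleep }
# result == :timeout; the block ran 4 times (1 initial + 3 retries)
```

So a single hung test burns four full timeouts per browser before it's finally reported, which goes a long way toward explaining those multi-hour runs.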
(2) The tests aren't unit tests
The tests exercise Silverlight features from DLR languages, rather than testing the integration between the two. Actually, the DynamicApplication class is never used in the tests!
This makes a good point about testing in general. While end-to-end/sign-off/acceptance tests are wonderful things, they can't be all the tests. If I can't run the tests while developing, then I'm not going to run them, period. Having a check-in system run them for you is fine, but I'd like to have a good chance of passing a 1.5-hour job the first time.
Solution: ditch it all
To get tests out to the world, the only sane thing would be to head back to the drawing board, which is what I've done with test/runner. Hopefully by using existing open source projects, a testing paradigm most people understand, and a very small test runner, this testing solution for AgDLR will be kept small and friendly for developers.
So, now that there's a sane way to verify changes to AgDLR don't break, please fork AgDLR and send me pull requests! Next post will probably be about adding continuous integration to AgDLR, so stay tuned.