My issue is that the software I'm writing has to take timing into account in its calculations: it's a program that analyzes stock behaviour. For example, if I call foo() twice with 2 seconds in between, the result might be different from calling foo() twice with 4 seconds in between, even if all the inputs are exactly the same. What I want to do is build an automated testing system to verify that the code does what it is supposed to do.
The first step is to pull any timing-related code out into a separate module, so that a script that is supposed to do something over the course of 30 minutes doesn't actually make an automated test wait 30 minutes to pass. Instead, there is a test module for timing that tells the program 30 minutes have passed, even if it's only been a few milliseconds.
This also means that the program can't use the traditional timing methods available (since the code is in .NET, that means no System.Timers.Timer or DateTime.Now). This makes things a little bit tricky.
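To make that separate module concrete, here is a minimal sketch of how the seam might look. All names here (IClock, SystemClock, ManualClock) are mine, not from the original code; the point is only that production code asks an IClock for the time instead of calling DateTime.Now directly, and the tests swap in a clock they control:

    using System;

    // Hypothetical clock abstraction -- the only place timing lives.
    public interface IClock
    {
        DateTime UtcNow { get; }
        void Sleep(TimeSpan duration);
    }

    // Production implementation: delegates to the real system clock.
    public class SystemClock : IClock
    {
        public DateTime UtcNow => DateTime.UtcNow;
        public void Sleep(TimeSpan duration) => System.Threading.Thread.Sleep(duration);
    }

    // Test implementation: time only moves when the test says so,
    // so a "30 minute" scenario finishes in milliseconds.
    public class ManualClock : IClock
    {
        private DateTime _now;
        public ManualClock(DateTime start) { _now = start; }
        public DateTime UtcNow => _now;
        public void Sleep(TimeSpan duration) => _now += duration; // returns instantly
        public void Advance(TimeSpan duration) => _now += duration;
    }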
My first thought for testing this sort of thing was to use a script like this:
    call foo()
    simulate 5 second pause
    call foo()
    assert that result is as expected

The main issue with this is that while the test process does simulate time passing, it does not send the program any data during that time. An example of something that wouldn't be easily testable this way is a program that says, "if the data received over the last minute follows pattern X, do Y." In this example the program is active during the entire minute; it is just watching and waiting for something interesting to happen.
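For the simple two-call case, a script like that could sit on top of the ManualClock sketched above. StockAnalyzer here is a hypothetical stand-in for the real system under test, and the assertion is only illustrative:

    using System;

    // Hypothetical system under test: just enough to make the sketch compile.
    public class StockAnalyzer
    {
        private readonly IClock _clock;
        public StockAnalyzer(IClock clock) { _clock = clock; }

        // A timing-sensitive calculation; the body is a placeholder.
        public decimal Foo() { return 0m; }
    }

    public static class ScriptedTest
    {
        public static void Run()
        {
            var clock = new ManualClock(new DateTime(2010, 1, 1));
            var analyzer = new StockAnalyzer(clock);

            var first = analyzer.Foo();
            clock.Advance(TimeSpan.FromSeconds(5)); // "simulate 5 second pause"
            var second = analyzer.Foo();            // but no data arrived during the pause

            if (first == second)
                throw new Exception("expected the pause to change the result");
        }
    }

The comment on the second Foo() call is exactly the weakness described above: Advance() skips the clock forward, but nothing feeds the program while it is "watching and waiting."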
My second thought is what I am thinking of implementing now: I basically have a CSV file for the data. The first column is the time elapsed since the previous row of data was received, in milliseconds. The other columns are just arbitrary data, with the first row giving each column a name. For example I'll have something like this:
    Time   Bid BMO   Ask BMO
    0      61.73     61.75
    500    61.73     61.76
    500    61.74     61.77

Then to trigger a test, I have something called a checkpoint. When a checkpoint is hit, it tells the system to run a series of tests. The tests are run, the results are reported, and any error or exception thrown halts that particular test. After that, the data continues until the end of the file is reached.
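A rough sketch of what that replay loop might look like follows. How a checkpoint is marked in the file (here, a row whose first field is CHECKPOINT) and the shapes of the two callbacks are my assumptions, not the author's actual design:

    using System;
    using System.IO;
    using System.Linq;

    public static class CsvReplay
    {
        // Replays a recorded data file against the system with no real waiting.
        public static void Run(string path, ManualClock clock,
                               Action<string[]> onData, Action runCheckpointTests)
        {
            var lines = File.ReadAllLines(path);

            // Row 0 holds the column names, e.g. Time, Bid BMO, Ask BMO.
            foreach (var line in lines.Skip(1))
            {
                var fields = line.Split(',');

                if (fields[0] == "CHECKPOINT")
                {
                    // Run the assertions for this point in the stream; a thrown
                    // exception halts this particular test, as described above.
                    runCheckpointTests();
                    continue;
                }

                // Column 0 is milliseconds since the previous row: advance the
                // fake clock instead of actually sleeping.
                clock.Advance(TimeSpan.FromMilliseconds(int.Parse(fields[0])));

                // Deliver the row's market data to the program under test.
                onData(fields.Skip(1).ToArray());
            }
        }
    }

Because the data is pushed through the same entry point the live feed would use, the "watching and waiting" scenarios become testable: the program really does see a minute's worth of ticks, just compressed into milliseconds of wall-clock time.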
To me it seems like the testing system is much more data-driven than a normal testing suite, so I'm not entirely certain I'm doing it the right way. However, at the same time the nature of the data is not quite the same as with other software, so maybe this sort of approach is a good one. What do you guys think?
1 comment:
I'm not a .net guy, but in Ruby I use mocking to do stuff like that.
A mock means that you're overriding the output of a function (it can even be a system function) and forcing it to return a given value in test mode. You don't have to rewrite your code to get around it; this is all done by the mocking framework.
I use it in my project to override the current date, and it works well.
I googled for ".net mocking" and found lots of references to packages, including a Stack Overflow discussion of the best approach.
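To translate the commenter's suggestion into .NET terms, here is a minimal sketch using Moq, one of the packages that kind of search turns up, reusing the hypothetical IClock and StockAnalyzer from the post above:

    using System;
    using Moq;

    public static class ClockMockExample
    {
        public static void Run()
        {
            // Moq can't override the static DateTime.Now itself; it needs a
            // seam such as the hypothetical IClock interface sketched above.
            var clock = new Mock<IClock>();
            clock.Setup(c => c.UtcNow)
                 .Returns(new DateTime(2010, 6, 1, 9, 30, 0));

            // The program under test now sees 2010-06-01 09:30 whenever it
            // asks the clock for the time.
            var analyzer = new StockAnalyzer(clock.Object);
        }
    }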