drjustin

Better Async testing?

Feb 1, 2010 8:27 PM

I've been doing a lot of async service testing lately and I'm trying to figure out the "correct" way of doing things.

 

Imagine I have a simple backend with just two methods: createTeam() and getAllTeams().  For example, I usually test the createTeam() method like this:

 

[Test(async)]
public function createTeam():void {
    // Step 1: create the team, then chain to createTeam2 on result
    var token:AsyncToken = service.createTeam('Los Angeles Lakers');
    token.addResponder(Async.asyncResponder(this, new TestResponder(createTeam2, fault), TIMEOUT));
}
public function createTeam2(data:Object, passThroughData:Object = null):void {
    // Step 2: fetch all teams, then chain to createTeam3 on result
    var token:AsyncToken = service.getAllTeams();
    token.addResponder(Async.asyncResponder(this, new TestResponder(createTeam3, fault), TIMEOUT));
}
public function createTeam3(data:Object, passThroughData:Object = null):void {
    // Step 3: assert the new team appears in the list
    var teams:ArrayCollection = data.result as ArrayCollection;
    assertThat('Team not created', 'Los Angeles Lakers', inArray(teams.toArray()));
}

 

This pattern appears to be very common, where multiple methods are chained together via Async.asyncResponder.  I know it works, but does anyone have an opinion, or examples, of a better way to test async services?

 

-Justin

 
Replies
  • Feb 2, 2010 6:40 AM   in reply to drjustin

    @drjustin - I'll throw in my 2 cents FWIW.  In your example I'm assuming there is a service class under test which has a dependency on something from the mx.rpc package (i.e. HTTPService, RemoteObject, or WebService).  Typically when I write unit tests for service classes, I like to stub out the Flex dependency with dummy data so I can validate that my service method code works as expected (e.g. that the code in Service#getAllTeams does what it's supposed to), independent of its dependencies.  One option is to create your own extension of the mx.rpc class as an internal class in the test and then inject it into your service.  After writing a few unit tests for a service class you may find this gets tedious, so I created a set of stubs for HTTPService and RemoteObject that provide a simple interface for creating stub data.  You can find more information on how to use them here:

     

       http://www.brianlegros.com/blog/2009/02/21/using-stubs-for-httpservice-and-remoteobject-in-flex/
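
    The internal-stub idea above might look something like the sketch below. This is illustrative only, not the API from the linked post: delivering the canned result through AsyncToken#applyResult relies on the mx_internal namespace and may need adjusting between SDK versions.

    ```actionscript
    import flash.utils.setTimeout;
    import mx.core.mx_internal;
    import mx.rpc.AsyncToken;
    import mx.rpc.events.ResultEvent;
    import mx.rpc.http.HTTPService;

    use namespace mx_internal;

    // Internal to the test file: an HTTPService that hands back canned data
    // instead of hitting the network.
    class StubHTTPService extends HTTPService
    {
        public var cannedResult:Object; // whatever the service method should "receive"

        override public function send(parameters:Object = null):AsyncToken
        {
            var token:AsyncToken = new AsyncToken(null);
            // Deliver the result on a later tick so the caller can attach
            // responders first, mimicking real async behavior.
            setTimeout(function():void
            {
                token.applyResult(new ResultEvent(ResultEvent.RESULT, false, true, cannedResult, token));
            }, 1);
            return token;
        }
    }
    ```

    You would then inject the stub into the class under test (via a setter or constructor argument, depending on how your service takes its dependency) and assert against the canned data without ever touching a server.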

     

    By using these, you can test each service method call independently, without relying on integration testing like your example below.  If you have a service method that is more of a pass-through (i.e. createTeam() just calls an HTTP service and nothing else), a valid unit test could be written using mock objects to verify that the other dependencies besides the mx.rpc class are called correctly.  mock-as3, mockito-flex, and asmock are great options to consider.
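
    To make the mock idea concrete without committing to any one framework's API, here is a library-free sketch; TeamService taking its dependency through the constructor is a hypothetical seam for illustration, and the frameworks named above would do this with far less ceremony.

    ```actionscript
    // A hand-rolled mock that records calls so the test can verify interactions.
    class RecordingGateway
    {
        public var calls:Array = [];

        public function createTeam(name:String):void
        {
            calls.push({method: "createTeam", name: name});
        }
    }

    // In the test (TeamService and its constructor seam are hypothetical):
    // var gateway:RecordingGateway = new RecordingGateway();
    // var service:TeamService = new TeamService(gateway);
    // service.createTeam("Los Angeles Lakers");
    // assertEquals(1, gateway.calls.length);
    // assertEquals("Los Angeles Lakers", gateway.calls[0].name);
    ```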

     

    If you're interested in writing just an integration test, like you've shown below, you may want to consider putting everything together into one test method:

     

    [Test]
    public function createTeam():void
    {
        var fault:Function = function(event:FaultEvent, passThroughData:Object):void
        {
            Assert.fail("No fault should be thrown");
        };

        var verifyTeamAdded:Function = function(event:ResultEvent, passThroughData:Object):void
        {
            var teams:ArrayCollection = event.result as ArrayCollection;
            assertThat('Team not created', 'Los Angeles Lakers', inArray(teams.toArray()));
        };

        var allTeams:Function = function(event:ResultEvent, passThroughData:Object):void
        {
            var token:AsyncToken = service.getAllTeams();
            token.addResponder(Async.asyncResponder(this, new TestResponder(verifyTeamAdded, fault), TIMEOUT));
        };

        var token:AsyncToken = service.createTeam('Los Angeles Lakers');
        token.addResponder(Async.asyncResponder(this, new TestResponder(allTeams, fault), TIMEOUT));
    }

     

    I find this leads to better readability in the test.  I'd also suggest typing the data parameter of each result handler function (e.g. ResultEvent/FaultEvent), again for readability.  I wouldn't worry too much about reuse of code in tests; IMO tests should be islands unto themselves, short of the shared work done in your before and after statements.  Doing so makes regression testing that much more valuable, since you'll know you're using objects configured just for that test and not reused out of convenience with possible side effects.  Literals are one of the major exceptions I find for reuse.  For literals, I follow the basic rule that they're shared at the lowest scope visibility: if a literal is shared between test methods, I define it as a constant on the test class; if it's reused between test classes, I typically create a TestConstants class and make the literal available as a static constant.

     

    In the end, most devs write integration tests because they're the most intuitive test cases we see initially.  Consider writing unit tests using stubs and mocks to not only get better coverage, but to have more granular tests so regression testing has that much more value.  Sorry for the rant, but I hope it helps some.

     

    -Brian

     
  • Feb 2, 2010 10:59 AM   in reply to drjustin

    @drjustin - As far as the call chains go, I can understand how they don't necessarily look the best, but due to the asynchronicity of the RPC calls, if you need to write an integration test against methods that return an AsyncToken, chaining is unfortunately the only way I know of to make sure everything happens in sequence.  There is Async#proceedOnEvent, but I don't think it will prevent subsequent code in a test method from being executed before an event is returned.  Another suggestion would be to reconsider the contract exposed by your Service class and possibly expose custom events rather than tapping into AsyncToken for testing.  If your service class has very little responsibility though, this may be more work than it's worth.

     

    In terms of stubs, I think we're talking about the same thing.  Even though you use Swiz, you may find that the stub implementation I've thrown together saves you a bit of coding if you're willing to give them a shot.  I use Swiz as well, but I prefer to test my classes independent of the framework to make testing dependencies simpler.  This is just preference though, so to each their own.

     

    Regarding a thin client though, concerns (e.g. serialization) do crop up that warrant unit tests on the methods of the service class itself, rather than solely the kind of integration test you provided in your example.  When regression testing, it's always nice to have unit coverage, even for simple pass-through methods, so that if the implementation of a method or one of its dependencies changes, you'll "ideally" be alerted by failing tests.  A thick client doesn't necessarily mean the data layer of the application is thick, but it's always a plus to have tests to fall back on.

     

    Again just my 2 cents.  Hope this helps to explain more where I was coming from.

     

    -Brian

     
  • Feb 2, 2010 12:10 PM   in reply to legrosb

    Take a look at the Sequencer classes.

     

    Here is a link from the fluint site, as we haven't had time to replicate this content yet.

    http://code.google.com/p/fluint/wiki/Sequences

     

    I believe Sequences accomplish your goal.  Right now they are built around the idea of setting some property (or calling a function) and waiting for a response, eventually handling an assertion.  Each of these types of steps (action and waiting) is defined by an interface, so it would be pretty easy to make a type of step which called your service, for example.
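
    From memory of the fluint wiki, a sequence reads roughly like the sketch below; the component, event type, and handler names are placeholders, so check the linked page for the exact API.

    ```actionscript
    // Rough shape of a fluint sequence: action steps, wait steps, final assert.
    var sequence:SequenceRunner = new SequenceRunner(this);

    // Action step: set a property on the object under test.
    sequence.addStep(new SequenceSetter(component, {text: "Los Angeles Lakers"}));

    // Pause step: wait for the object to announce it has updated.
    sequence.addStep(new SequenceWaiter(component, Event.CHANGE, TIMEOUT));

    // Assertion handler invoked once the whole sequence completes.
    sequence.addAssertHandler(verifyTeamAdded, null);
    sequence.run();
    ```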

     

    This may or may not be a better approach for you, but sequences exist specifically because chaining together async calls is tedious and ugly.

     

    Mike

     
  • Feb 7, 2010 9:36 PM   in reply to drjustin

    Anonymous functions aren't going to work.  If you really want to do chaining, look at Sequences.  Again, they are not intended to do precisely what you are trying to do, but you are going to have much better luck down this path than any other.

     
  • Feb 8, 2010 9:23 PM   in reply to Michael Labriola

    I ran through a couple of demos on my machine and I think I was able to verify the runtime errors @drjustin was seeing from using locally declared functions defined within the same method scope.  From what I can tell, when the test method is run by its runner, the hooks for each handler are set up using the locally scoped functions; when the test method completes execution, potentially before the subsequent events have fired, the handlers from the test method have already been released.  When the appropriate code attempts to call the handlers, they've already been cleaned up, causing an error.  I believe this is what Mike was talking about in his last post.  I went on to mess with Sequences, but since your test relies on events dispatched from the AsyncToken returned by the service, rather than from the service itself, I couldn't find a way to use the library to accomplish the goal of your original test.

     

    I saw some chatter about mx.rpc chaining on the Swiz mailing list a few days back, maybe they could provide a supplemental tool to make testing easier since you're using their framework:

     

       http://groups.google.com/group/swiz-framework/browse_thread/thread/644736f797ab8fb7/2f91489f1dcd7593?lnk=gst&q=chaining

     

    Maybe Mike can chime in with more options, but from what I can tell, your original implementation may be the only way to successfully integration test your service (noting unit testing is still entirely possible if you want a different level of granularity).  Sorry for the bum suggestion; I'm sure we can figure something out.  I'll keep digging around.

     

    FWIW -

     

    -Brian

     
  • Feb 8, 2010 9:28 PM   in reply to legrosb

    The locally defined functions without metadata do not work by design.  That is not how the framework is set up to work.

     

    Sequences are the correct answer.  Sequences simply depend on classes that are either actions or pauses.  So, an action might be calling the server; then it waits for a response before continuing.

     

    Right now the only steps in the framework are those that deal with local settings, such as setting a property or calling a method.  However, the sequence infrastructure was designed to be extended as needed.  The Sequencer simply cares about classes implementing the correct interfaces; it will gladly allow them to be either actions or pauses (waits) and will happily run the way needed, with much less code.

     

    Mike

     
  • Mar 4, 2010 2:16 PM   in reply to drjustin

    Hi,

     

    First of all, crossing the system boundary from your unit tests (you call the actual service, right?) does not seem like a best practice.  You seem to be testing actual services from Flex.  It might be easier to test them using a dedicated tool like SoapUI for web services.

     

    If you still need to check how your Flex code deals with the services, I propose using a mocking framework to stub the network layer.  One option might be mockito-flex.

     

    Also, there is a 'morefluent' library that addresses the verbosity of async testing.

     

    Regards,

    Kris

     
  • Mar 5, 2010 5:43 AM   in reply to drjustin

    After thinking about your issue a bit more (and crashing my car), it became clear to me that I had a better solution for you, one we tried to adopt for some UI integration tests (like verifying that IoC and binding are configured right).

     

    Take a look here: http://bitbucket.org/loomis/morefluent/wiki/IntegrationTesting

     

    Briefly, the idea is to use 'order' metadata to make sure tests run in a specific order, so that you can give up on the ugly cascade.  I used morefluent to deal with asynchronous callbacks in my example, but I guess you can keep your original FlexUnit 4 facilities.
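
    A minimal sketch of the order idea using plain FlexUnit 4 facilities rather than morefluent; the service, TIMEOUT, and handler names are placeholders, and order in the Test metadata assumes a runner version that honors it.

    ```actionscript
    // Split the cascade into two tests forced to run in sequence via 'order'.
    // Each test still manages its own async completion.

    [Test(async, order=1)]
    public function createsTeam():void
    {
        var token:AsyncToken = service.createTeam("Los Angeles Lakers");
        token.addResponder(Async.asyncResponder(this,
            new TestResponder(onCreated, onFault), TIMEOUT));
    }

    [Test(async, order=2)]
    public function listsCreatedTeam():void
    {
        var token:AsyncToken = service.getAllTeams();
        token.addResponder(Async.asyncResponder(this,
            new TestResponder(verifyTeamAdded, onFault), TIMEOUT));
    }
    ```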

     

    Cheers,

    Kris

     
