Not out of the box, but there is a wonderful widget by Michael Lund, AKA cpguru, that could be used:
I second Lilybiri's recommendation to use the Save and Load Data widgets.
However, one thing to be aware of:
Although you can definitely save the values of scoring variables from a project to these Shared Objects using the widget, you cannot necessarily load those values back into the scoring variables of the next project, because most of them are READ-ONLY system variables in Captivate.
Depending on what you are trying to achieve, this might not be an issue. But if you were trying to build some kind of cumulative score to report to your LMS at the end, it might not work, because Captivate is only set up to report the score for the current module.
Displaying the saved values, on the other hand, is fine.
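To illustrate the distinction: a minimal sketch, assuming an HTML5-published project, where Captivate exposes its variables to JavaScript through `window.cpAPIInterface`. The small stub object here only stands in for that runtime so the snippet runs anywhere; the quiz variable names (`cpQuizInfoPointsscored`, `cpQuizInfoTotalQuizPoints`) are Captivate's standard system variables, and `v_previousScore` is a hypothetical user variable of your own.

```javascript
// Stub standing in for Captivate's HTML5 runtime so this sketch is
// runnable on its own; a published module provides window.cpAPIInterface.
const cpAPIInterface = {
  vars: { cpQuizInfoPointsscored: 8, cpQuizInfoTotalQuizPoints: 10 },
  getVariableValue(name) { return this.vars[name]; },
  setVariableValue(name, value) { this.vars[name] = value; },
};

// READING system quiz variables for display works fine.
const scored = cpAPIInterface.getVariableValue('cpQuizInfoPointsscored');
const total  = cpAPIInterface.getVariableValue('cpQuizInfoTotalQuizPoints');
console.log(`You scored ${scored} of ${total}`);

// Copying a restored value into your OWN user variable is safe;
// trying to overwrite a read-only system variable such as
// cpQuizInfoPointsscored is what Captivate will not honour.
cpAPIInterface.setVariableValue('v_previousScore', scored);
```

So the widget can round-trip the numbers, but only into variables that Captivate lets you write.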
I am actually trying to do an end-run around a (somewhat daft) limitation of Captivate's quiz question randomization. Since it draws your 10 questions from the pool only once per instance, I am moving the quiz into a separate project that is called from the main one. When finished, it needs to return to the point where it left off and bring the score back with it. Then the "Retake Quiz" button should launch a fresh instance each time, with 10 new questions (that's the plan, anyway).
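One hypothetical way to carry the score back, assuming both projects are published to HTML5 and can run Execute JavaScript actions: the quiz project writes its result to `localStorage` on its last slide, and the main project reads it on the return slide. The key name, the function names, and the `localStorage` stub below are all illustrative, not anything built into Captivate.

```javascript
// Stand-in for the browser's localStorage so the sketch runs anywhere;
// a published HTML5 module would use window.localStorage directly.
const localStorage = {
  store: {},
  setItem(key, value) { this.store[key] = String(value); },
  getItem(key) { return key in this.store ? this.store[key] : null; },
};

// --- In the QUIZ project, on the last slide (Execute JavaScript) ---
function saveQuizScore(points) {
  localStorage.setItem('quizScore', points);
}

// --- Back in the MAIN project, on the return slide ---
function loadQuizScore() {
  // Copy this into one of your own user variables (e.g. v_quizScore),
  // not into a read-only system variable.
  return Number(localStorage.getItem('quizScore') || 0);
}

saveQuizScore(7);
console.log(loadQuizScore()); // 7
```

Note this only moves the number between windows; as pointed out above, it will not make the parent module report a cumulative score to the LMS.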
OK. This post is a bit of a vent, but here goes...
While I might agree that the way Captivate's question pool randomisation currently works should be only ONE of the ways it can work, I disagree with the statement that this way is "daft". There is a very clear instructional logic to it.
If you keep throwing different questions at the user each time they retake the quiz, you will get X number of users complaining because they were EXPECTING to see the same questions again so that they could master those particular questions the next time around. Giving them totally different questions will just confuse some people, and if that means they flunk the course or the quiz, you can bet your life they'll complain that the issue is the way the course works... not the fact that they didn't pay close enough attention.
In a corporate environment, it only takes one or two people out of a thousand complaining about some small aspect of the way a course works, and management will ask for it to be changed to appease this minuscule minority. So having randomisation work the way it currently does makes the quiz questions a little more predictable for these users.
Another reason the randomisation works this way is that the Interaction ID is tied to the specific Question Slide, not to the Question Pool. So if different Interaction IDs were being sent to the LMS over SCORM each time the user attempted the quiz, the likelihood that your scoring on the LMS end would be all out of whack is pretty high. The LMS might assume the user was supposed to have answered ALL of these questions and divide their result over all of them, leading to a lower-than-normal score. So there would likely be technical issues. It's hard enough already getting LMSs to score reliably and predictably with anything other than a simple course structure.
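To make that dilution concrete, here's a back-of-the-envelope sketch (the averaging rule is an assumption about how such an LMS might behave, not any particular vendor's documented algorithm): two attempts that each send 10 fresh Interaction IDs leave the LMS with 20 distinct interactions on record.

```javascript
// Percentage score if the LMS divides correct answers over every
// distinct interaction ID it has recorded for the user.
function percentOverInteractions(correctAnswers, interactionIdsSeen) {
  return (correctAnswers / interactionIdsSeen) * 100;
}

// 8 correct on the second attempt, scored against that attempt's
// own 10 questions:
console.log(percentOverInteractions(8, 10)); // 80
// The same 8 correct, scored against all 20 distinct Interaction IDs
// accumulated across two randomised attempts:
console.log(percentOverInteractions(8, 20)); // 40
```

Same performance, half the reported score — which is exactly the kind of "out of whack" result you'd be debugging.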
Having said that, I'm VERY much in favor of seeing a future version of Captivate remove this limitation, so that a checkbox somewhere specifies whether randomisation happens at each quiz attempt rather than at each module launch.