...
Original Type | Translated Type | Question Text | Points | Answers | Feedback |
---|---|---|---|---|---|
Multiple Choice/Multiple choice single choice (mcsc) | Defaulted to Essay type | Text imported correctly | Point value correct | Defaulted to individual Model Short Answer records - only the last displayed | Feedback was not saved |
True-False (TF) | Correctly typed | Text imported correctly | Point value correct | True and False answers displayed; correct answer not selected | Feedback was not saved |
Essay | Correctly typed | Text imported correctly | Point value incorrect - original value of 5.0 not saved | n/a to translation | Feedback was not saved |
Matching | | | | | |
Fill-in-Blank | | | | | |
Multiple Response (Grading Method: Right Less Wrong) | | | | | |
Multiple Response (Grading Method: All Points or None) | | | | | |
Algorithmic | | | | | |
Of note here is that Samigo's feedback models are more numerous and flexible than those of Respondus. Depending on the question type, Samigo offers both question-level and answer-level feedback for correct and incorrect responses; Respondus offers only general feedback, except for true-false questions. This means fewer translation points are required.
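Because Respondus usually supplies only a single general feedback string while Samigo has separate correct/incorrect slots, a translator has to decide how to fan that one string out. The sketch below illustrates one possible mapping; the field names (`general_feedback`, `true_feedback`, `correct_feedback`, etc.) are hypothetical and do not reflect the actual Respondus or Samigo schemas.

```python
# Hypothetical sketch of a Respondus-to-Samigo feedback mapping.
# All dictionary keys here are illustrative assumptions, not real schema names.

def translate_feedback(respondus_item):
    """Map Respondus's single general-feedback field onto Samigo's
    question-level correct/incorrect feedback slots."""
    samigo = {"correct_feedback": None, "incorrect_feedback": None}

    general = respondus_item.get("general_feedback")
    if general:
        # Respondus provides only one feedback string for most types,
        # so reuse it for both of Samigo's question-level slots.
        samigo["correct_feedback"] = general
        samigo["incorrect_feedback"] = general

    # True-false is the one Respondus type with per-answer feedback,
    # which can override the general string when present.
    if respondus_item.get("type") == "TF":
        if "true_feedback" in respondus_item:
            samigo["correct_feedback"] = respondus_item["true_feedback"]
        if "false_feedback" in respondus_item:
            samigo["incorrect_feedback"] = respondus_item["false_feedback"]

    return samigo
```

A richer mapping would also populate Samigo's answer-level feedback where the question type supports it, but with only general feedback on the Respondus side there is usually nothing distinct to put there.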
Also of note here is that Samigo does not currently support algorithmic question types. Respondus exports this question type as a .swf (Flash movie): the question text is actually the JavaScript launch code, and the XML answer output is similar to that of a multiple-choice, single-correct-answer question.
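Since Samigo has no algorithmic type, an importer would need to recognize these items and route them to a fallback. A simple heuristic, sketched below, is to look for the Flash/JavaScript launch stub in the question text; the marker strings are assumptions about what that stub contains, not verified Respondus output.

```python
# Hypothetical sketch of detecting Respondus algorithmic items during import.
# The marker substrings are assumptions, not the actual Respondus export format.

def looks_algorithmic(question_text: str) -> bool:
    """Heuristic: algorithmic items carry a .swf/JavaScript launch stub
    in place of plain question text."""
    text = question_text.lower()
    markers = (".swf", "<script", "javascript")
    return any(marker in text for marker in markers)
```

An importer using this check could either skip such items with a warning, or import them as multiple-choice single-correct questions, matching the shape of the XML answer output noted above (at the cost of losing the algorithmic behavior).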