Throughout the course, you will be involved in a group project: conceptualizing, designing, implementing, testing, and refining a system that will evolve from a paper description, to a simple prototype, and finally to a functional system.
Each project team should consist of two or three students, although exceptions may be made in consultation with the instructor. The purpose of the project is to gain experience in applying concepts and techniques from the class to a real-world problem. What is most important to keep in mind is that your project marks are not determined by a wealth of functionality, nor by the complexity of what you tackle. Remember: this is a course on human-computer interaction.
Because you will be working with human subjects during the semester, you will need to sign and submit an agreement to ethical research conduct before you can begin your activities. You must also complete a consent form (also available in French) for each participant with whom you work, and provide the participant with a copy of the form.
Note that whatever you choose to develop may already exist in some other form; in that case, one of your initial tasks in the first project deliverable will be to critique the existing version and identify what you consider to be the significant shortcomings that warrant "re-inventing the wheel". Alternatively, your project idea may be inspired by a concept description that is not available as a functional implementation, or it may be something entirely of your own invention. In any event, what we will focus on in our evaluation of your work is not the application itself, but rather your ability to design, develop, evaluate, and refine an effective interface that supports a compelling level of interaction.
You are free to incorporate hardware components or re-use existing software, with proper acknowledgment.
Before you can get started on the first project deliverable, you must first "pitch" your idea by writing up a brief description of the problem.
You may find it useful to consult previous years' project notebooks to get a sense of expectations, both in terms of organization and depth of content, but please keep in mind that the projects may differ significantly from year to year in terms of their themes and formal requirements.

Your project pitch should include:
The publicly posted pitches, along with the feedback to be provided by the instructor and course TAs, are intended to help you quickly form project teams, decide on the specifics of your project topic, and begin preparation of your first formal deliverable, the "observations and proposal", which will be submitted through your project notebook, described below.
All reports and information on your project are to be collected into a Web-based notebook, maintained separately by each project team. You may wish to review some of the notebooks from previous years for examples.
Each group will designate one "webmaster" who will be responsible for maintaining the notebook on a publicly accessible, password-protected web server, using the following .htpasswd contents:
hciagent:$apr1$fP9MjEGP$Ub.pM22LTCJ1nCynP2lvK1
You should add passwords for your group members to this file as well, using the htpasswd command.
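If the htpasswd utility happens not to be available on your server, a rough equivalent can be sketched with openssl (this assumes openssl is installed; the username "alice" and password "s3cret" below are placeholders, not course-provided values):

```shell
# Generate an Apache-style MD5 ("apr1") hash and append a matching entry
# to the password file. "alice" and "s3cret" are placeholder values.
hash=$(openssl passwd -apr1 s3cret)
echo "alice:$hash" >> .htpasswd
```

With htpasswd itself, the equivalent is simply `htpasswd .htpasswd alice`, which prompts for the password interactively.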
Assuming the HCI project is placed at the top-level of your server, you would then specify the following in an .htaccess file to ensure that only authorized users can access the pages:
AuthType Basic
AuthName "HCI Project access restricted"
AuthUserFile <filepath to .htpasswd file>
require valid-user
Note that if the filepath is not absolute (an "absolute" filepath begins with a slash, e.g., /home/username/webpasswords), it is treated as relative to the web server root. If you do not know the web server root, it is safest to use an absolute path.
The top-level URL for your notebook must be submitted by the deadline indicated on the course syllabus, to ensure that you have an opportunity to verify the results of a test grab before your actual project proposals are grabbed the following week. All of your project information must be stored under that URL. For important information about the web notebook, see the helpful guidelines prepared by previous TAs. Be forewarned: just because your web pages look right in your account does not mean they will appear correctly when transferred, and this may cost you marks.
Each component of the project must be associated with a link from the main page of your notebook. Its content should be organized in such a manner that a concise overview is presented to a reader who only has approximately five minutes to gain an understanding of your work. You may provide as much supplemental material as you wish, linked to each entry, but the main entry must make sense on its own.
For each project deliverable, the process will work as follows:
Feedback is valuable because it helps groups understand why they are receiving a certain grade and it provides external perspective on the current state of their project. It is also intended to encourage greater reflection on the relevant HCI criteria during your assessment. Feedback can receive a score of 0 through 3, as illustrated in the following examples from the "Observation and Proposal" deliverable from a previous year's project. Feedback scores are scaled to account for 1/3 of the assessment grade.
Score | Example | Justification |
---|---|---|
0 | *no comment* | No feedback given (or unhelpful/irrelevant feedback) |
1 | Overall, the proposal is clearly presented to the readers. High level design, the structure is pretty clear, but author could name the existing software or API more precisely. For justifying feasibility part, it could be better to mention which members are working on which parts. | The feedback given is superficial and not meaningful for the group: although three comments are provided, the feedback does not offer any insight about the project that the group can think about. Rather, it focuses on justifying grades given on certain sections of the rubric. |
2 | | Meaningful critiques are given, but not all of the assessed scores are justified: this feedback is meaningful because it gives the group being graded questions to think about when further developing their system. It also notes specific places where the group could improve, and identifies points of confusion from an outside perspective. It did not receive the top score because it does not justify all the scores given for each rubric criterion. |
3 | Observations: … Problem: … | Meaningful critiques are given and there is justification for the scoring of each rubric criterion. |
This deliverable is intended to acquaint you with the user community for whom your group will be designing and developing a prototype technology. It will require, as a first step, observation of real users engaging in everyday activities. From your observations, you will develop personas and a use case scenario associated with the activities they perform.
In turn, the personas and scenarios will help you think and talk about how technology can be designed and developed to facilitate the users' tasks. As such, this part of the project is perhaps the most important, as it constitutes the basis for your work throughout the rest of the semester.
As you think about the problem and your proposed solution, keep in mind the high-level question we will be asking throughout the remainder of the project: "How will we judge the HCI quality of your solution?" Remember that the project is not intended to require massive engineering implementations.
The following rubric describes the assessment criteria by which your group's deliverable will be graded.
Activity | Details | Unsatisfactory | Bare minimum | Satisfactory | Above and beyond |
---|---|---|---|---|---|
Observing users | Observe several users carrying out the activity you've chosen to focus on for your project. Describe your observations of these users as they carried out the specific activity. | 0: No observations or completely irrelevant observations. | 1: Observations of only one person and/or of an irrelevant activity to the proposed project. | 3: Observations of only two people and/or of an activity that is not of central importance to the proposed project. | 5: Observations of three or more people, involving an activity of central importance to the proposed project. |
Identifying the problem | Describe the interaction problem you observed or the opportunity for improved design that arises from your observations. Provide supporting material, e.g., brief video clips of small file size, photographs, or illustrations that demonstrate the interaction problem or an opportunity for improved design. | 0: No problem or opportunity for improved interaction identified. | 1: Only a trivial problem or opportunity for improved interaction was identified. | 3: A relevant problem or opportunity for improved interaction was identified but was not clearly explained in the description or illustrated by the supporting material. | 5: A relevant problem or opportunity for improved interaction was clearly identified, well explained, and illustrated by the supporting material. |
Developing personas | Based on your observations, formulate one or more personas (or "audience segments") that describe the different types of people who will benefit from the project you are proposing. Include the intermediate results of your work in transforming your observations into personas. | 0: No personas were provided. | 1: Only one persona was provided and was not clearly tied to the observations carried out. | 3: Two or more personas were provided, relevant to the observations, but the process of developing the personas was sometimes unclear. | 5: Two or more personas were provided, relevant to the observations, and the application of the task-based audience segments process to develop these personas was clearly illustrated. |
Illustrating a use case scenario | Provide a story about how one of your personas would perform their task with your proposed system. The scenario should describe the important user actions and system reactions (the expected interaction) in reasonable detail, and convincingly demonstrate the use of the proposed system to address the user's need. | 0: No use case scenario was provided. | 1: The use case scenario lacks important details and/or does not demonstrate how the proposed system addresses the user's need. | 3: The use case scenario provides adequate details, but the description of the expected interaction or how the proposed system addresses the user's need was poorly communicated, or left major questions unanswered. | 5: The use case scenario provides a comprehensive and informative description of user interaction with the proposed system and clearly explains how it addresses the user's need. |
Finding related products | Carry out some research to find what other solutions exist or have been attempted to solve the problem you're tackling. Describe these alternatives and their limitations. | 0: No related products were described. | 1: Only a single related product was described, but its limitations in addressing the user need were not made clear. | 3: More than one related product was described, but other important (and sufficiently different) alternatives were missed, or the description of their limitations left major questions unanswered. | 5: More than one related product was described, and the descriptions of their limitations were clear and informative. No important alternatives were ignored. |
Comparing products | Describe how the proposed solution differs from the related products and is superior to them for the target persona(s). | 0: No comparison with other products was provided. | 1: The comparison with other products was superficial and not meaningful for the described personas. | 3: The comparison with other products was reasonably communicated, but it was not entirely clear how the proposed solution was superior for the described personas. | 5: The superiority of the proposed solution over related products was clearly described and relevant to the described personas. |
Designing at a high level | Provide a clearly understandable architectural diagram of your proposed system, including any pre-existing software and hardware building blocks needed to realize the functionality of the proposed system. | 0: No diagram or description of the building blocks of the proposed system was provided. | 1: The diagram was unclear and/or missing important blocks necessary to understand how the proposed system would function. | 3: The diagram was reasonably clear, but lacking sufficient detail for another team to build the proposed system. | 5: The diagram clearly described all relevant blocks and identified pre-existing software and hardware that would be used. Another team, with sufficient resources, could likely use the diagram as a high-level guide to build the proposed system. |
Justifying feasibility | Convince the reader that you will be able to build your system by the end of the semester. Explain what additional building blocks (software and hardware) need to be developed and how your team possesses the necessary skills to do so. Offer plausible justification for the feasibility of the estimated implementation effort, which, as a general guideline, should be in the ballpark of 20-30 hours per group member. | 0: No justification of feasibility. | 1: Poorly communicated or unconvincing justification. | 3: Reasonable justification of feasibility, but estimates of required time effort and/or description of relevant team experience were unconvincing. | 5: Convincing justification of feasibility, estimates of required time effort, and description of relevant team experience for each necessary component. |
The objective of this part of the project is for you to explore various design concepts, implement a limited number of low-fidelity mockups of your proposed system, develop a test plan, and carry out an initial set of tests of these prototypes that will inform your work in the next part of this project.
First, carry out the 10 plus 10 method of generating design concepts, sketching these, and refining them through discussion with your team members (and possibly others). Your group should then decide on a limited number of concepts you'd like to explore further.
Next, build paper prototypes, physical mock-ups, or other low-fidelity (specifically, not computer-based) prototypes of your top 2-3 design concepts, as you deem appropriate, in order to allow investigation and validation of the high-level decision options that your team is considering. Remember that your prototypes should not be overly detailed, in particular as you want to focus on the "big picture" at this stage. It will likely be useful (if not imperative) to employ "Wizard of Oz" techniques, with members of the design group animating the live operation of the envisioned system.
Document your work through a set of illustrations, photographs, or videos that are sufficient to describe the "working flow" of your proposed system and illustrate how users can perform each intended task.
Next, determine a set of usability goals that will be used for evaluation purposes. Usability goals are the overall characteristics that you use to define the success of your system. For instance, if you would like to ensure that your system is easy to learn, then your usability goals could be something along the lines of "New users should be familiar with all aspects of the system within 1 hour of use". There are several metrics you can use to measure such success, such as percentage of benchmark tasks completed within some pre-determined time period, number of errors made by users, and degree of user satisfaction, as expressed through positive comments while carrying out the tasks, or measured through post-task Likert scale questionnaires.
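As a concrete sketch, a quick tally over a data collection sheet can turn such observations into a completion-rate metric. The file name, tasks, and values below are invented purely for illustration:

```shell
# Hypothetical data collection sheet: one row per benchmark task, with a 1
# if the subject completed it within the time limit and a 0 otherwise.
cat > results.csv <<'EOF'
task,subject1,subject2
navigate,1,1
select,1,0
confirm,0,1
EOF

# Percentage of task attempts completed, across all subjects and tasks.
awk -F, 'NR > 1 { for (i = 2; i <= NF; i++) { total++; done += $i } }
         END { printf "%.0f%% of task attempts completed\n", 100 * done / total }' results.csv
# prints "67% of task attempts completed"
```

Similar one-liners can tally error counts or average the Likert ratings gathered in your post-test questionnaires.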
What you are interested in determining is whether these usability goals are being met. To do so, you test your prototypes on a set of benchmark tasks, which should:
Note that a benchmark task can (and often will) consist of several smaller subtasks, such as navigation, selection, confirmation/cancellation, etc.
Then, design the test materials as relevant to evaluating your prototypes. These may include an observer briefing, user introduction, pretest questionnaire, user or training documentation, a set of test tasks to be given to the user, data collection sheet, post-test questionnaire, and a test script. Some helpful examples can be found in this excellent overview on Usability Evaluations.
Apply the usability testing procedure on at least two potential users (from an actual expected user population) and document the results. You may wish to make a recording (audio, photographic, or video) of your test sessions for later review. The recordings may also be useful to illustrate something you learned about the competing designs (whether good or bad), as described below in the grading rubric. The results from these early test sessions will likely prove helpful as you modify the design of your system throughout the term.
You must first obtain consent from your test user(s) before taking photographs or making any recordings. Video excerpts included in your notebook should be kept brief; all such supplemental materials should be saved in an open standard format (MP4 recommended) and compressed to fit in 10 MB or less. These should only be stored in your password-protected project notebook, and not made available publicly, e.g., through YouTube or Vimeo. It is customary (especially in a university setting) to remove nominative information concerning your test user(s) and instead refer to them as "subject #1", etc.
The following rubric describes the assessment criteria by which your group's deliverable will be graded.
Activity | Details | Unsatisfactory | Bare minimum | Satisfactory | Above and beyond |
---|---|---|---|---|---|
Design concepts | Sketch a number of significantly different design concepts. The envisioned interaction with the system should be understandable from each sketch with at most a minimal amount of accompanying text. Note that your team can select a subset of your sketches to demonstrate a sufficient diversity of design concepts you explored. | 0: No sketches, or the sketches completely lacked clarity. | 1: Fewer than 5 sketches clearly illustrating different design concepts. | 3: Between 5 and 9 sketches clearly illustrating different design concepts. | 5: At least 10 sketches clearly illustrating different design concepts. |
Prototypes | Document the prototypes that you developed to explore competing design decisions, making sure that a reader can understand how the prototypes are "used" during testing. | 0: No documentation of low-fidelity prototypes. | 1: Only a single low-fidelity prototype documented or the prototype(s) appear inadequate to convey a sense of the most important functionality of the proposed system. | 3: Multiple low-fidelity prototypes, representing important functionality of the proposed system, but only sufficient to test one main design decision. | 5: Multiple low-fidelity prototypes, representing important functionality of the proposed system, and well suited to compare across several important design decisions. |
Usability goals and benchmark tasks | Decide on the most relevant usability goals for your system and choose appropriate "benchmark tasks" that you will use to determine whether these goals are met. | 0: No usability goals indicated. | 1: Usability goals are listed, but without rationale; the benchmark tasks chosen do not reflect the most important elements of functionality or allow for determination of whether the usability goals are met. | 3: Reasonable rationale is offered for the described usability goals, but the benchmark tasks are overly narrow, poorly defined, or otherwise do not clearly tell the user what to do. | 5: A convincing rationale is offered for the described usability goals; the benchmark tasks are appropriate for these goals, relate to the most important elements of system functionality, and tell the user only what to do, not how. |
Test materials | Provide samples of your observer briefing, user introduction, pretest questionnaire, user or training documentation, description of test tasks for the user, test script, data collection sheet, and post-test questionnaire. | 0: Test materials are lacking. | 1: Only limited testing materials were documented or do not correspond to the benchmark tests. | 3: Most of the requested test materials were provided, and relevant to the benchmark tests, but some items are incomplete. | 5: All requested test materials were provided, relevant to the benchmark tests, and are complete. |
Summary of test results | Document the test procedure and the results obtained. Analyze these results to draw conclusions regarding the good and bad of your designs and discuss what changes are suggested by these results. | 0: No test results provided. | 1: Only limited testing described, lacking meaningful analysis of results. | 3: Adequate documentation of testing, including suitable (brief) video of a sample test session, and some discussion of the results. | 5: Thorough documentation of testing, including suitable (brief) video, and a clear summary of test results for each of the benchmark tasks presented in a well-organized tabular form. |
At this point, you are ready to give users a clearer sense of the look and feel of your application. This means that your prototypes will evolve from paper, cardboard, and styrofoam, to a second version that runs on a computer. For clarity, "computer" in this case means that the system performs some type of information processing, whether through a full-fledged computer, a mobile processor, an embedded microcontroller, or similar.
However, you are equally concerned with learning about any design problems at an early stage, before you have invested an inordinate effort in development. Thus, in addition to implementing your second prototype, you will now formulate a detailed usability test plan, expanding on your effort from the earlier test plan. Rather than conduct the usability testing yourself, this will be the task of another team during the next phase of the project.
For this assignment, your team must implement a computer-based prototype that allows users to evaluate the "look and feel" of your system with minimal or no use of Wizard of Oz techniques. While this prototype should be based on your description from earlier parts of the project, it is understood that the design will include modifications in light of feedback and initial user testing. Keep in mind that your prototype does not need to be functionally complete, nor is there any requirement that it use the final anticipated hardware at this stage. Rather, it should simply be sufficient to give a reasonable (albeit imprecise) impression of the intended interaction to candidate users. In this regard, an initial user manual should be prepared unless such a manual is inappropriate for your user group.
Since your prototype will likely need to be taken "on the road" for testing with actual users, you should ensure that it runs on commodity hardware, is ideally agnostic of operating system, and requires no special operating skills to install and launch. If this is not the case, it will be your responsibility to make available the necessary components to the testing group providing the formative feedback on your prototype.
You must also provide a usability evaluation and testing plan for your prototype. Note that the evaluation will be conducted with no assistance from you, so you must be very explicit as to exactly what steps should be carried out.
One of the evaluation activities must consist of a formal "laboratory" experiment, i.e., a "usability test", in which some quantitative measurement (e.g., learning time before user is able to use the system as intended, number of mistakes users make before invoking a specific function, as instructed) can be made. Target numbers should be justified.
A second activity may consist of either of the following evaluations:
The point of such an exercise is to simulate a user's problem-solving process at each step in the dialogue to see whether the desired/expected outcome occurs. In this case, you must describe in detail several common tasks that a user might want to perform with your system so that the evaluation team can "walk through" the appropriate operations. You should not have to provide instructions of the form: "now press this button and wait for the green light to flash"!
For those groups who received feedback from their peers indicating that their usability goals for the low-fidelity prototype were unacceptable, you must first include a section in the present deliverable where you clearly define revised usability goals and justify them. You should then refer to those goals as you explain the rationale behind your evaluation plan.
The following rubric describes the assessment criteria by which your group's deliverable will be graded.
Activity | Details | Unsatisfactory | Bare minimum | Satisfactory | Above and beyond |
---|---|---|---|---|---|
User manual or installation guide | A prototype user manual or installation guide, as appropriate for your project, should be provided to enable the user and/or evaluating team to use your system independently. | 0: No user manual or installation guide provided. | 1: If a user manual is provided, it assumes that the user has prior knowledge of the system, and does not describe functions that are present, what they are for, and how to use them in practical terms. If an installation guide is provided, it does not adequately describe what is required to install, configure, and start the prototype. | 3: The user manual or installation guide correctly assumes that the user has no prior knowledge of the system. The document is task-focused, presenting instructions as step-by-step procedures and most images, symbols, icons, codes or relevant jargon are clearly defined or labeled. However, the instructions do not map exactly to the installation or functionality of the prototype or some elements are not clearly presented. | 5: The document is clear, task-focused, presenting instructions as step-by-step procedures and all images, symbols, icons, codes or relevant jargon are clearly defined or labeled. The organization is logical, e.g., including a table of contents, such that instructions are easy to locate. Tasks are grouped chronologically, by frequency of use, or by functional category, as appropriate to the nature of the document. |
Design Evolution | The design is likely to have undergone some revision in light of the feedback you received, both from test subjects and from your peer assessors. You will need to provide the rationale for any significant changes that you made to the design in response to that feedback. | 0: No discussion of changes made to the design. | 1: Only a few changes to the design have been explored or carried out. However, their connection to the feedback received was not apparent. | 3: Three or more changes to the design have been explored or carried out. Most of the changes were connected to the feedback received and were expressed in terms of the factors of usability or feasibility. | 5: Three or more insightful and specific changes were made to the design based on feedback as well as new insights from the designers. For every change made to the design from the point of the low-fidelity prototype, these decisions were justified from the perspective of usability or feasibility. |
Prototype Implementation | Your computer-based prototype should be sufficiently complete to allow users to evaluate the "look and feel" of your system with minimal or no use of Wizard of Oz techniques. While this prototype should be based on your description from earlier parts of the project, it is understood that the design will include modifications in light of feedback and initial user testing. | 0: No working prototype or an irrelevant prototype is provided. | 1: The prototype is not interactive, some of the interactions are broken, or many of the elements in the prototype appear to have no defined purpose. | 3: The prototype is mostly complete, although some functions are not yet interactive. Some elements have no defined purpose and it is difficult to know how to use certain parts of the prototype. | 5: The prototype successfully implements the relevant functions necessary to allow the user to carry out all important interactions. The functionality is clearly related to the objectives from the project proposal, reproduced in this deliverable for convenience of the assessors. |
User population | Does the described user population required for the test session correspond to the user segments defined in the project proposal? | 0: no description of user population | 1: description of user population required for the test session is provided but not clearly linked to user segments in the project proposal | 2: description of user population required for the test session corresponds to the user segments defined in the project proposal | |
Usability goals | What usability goals will be tested? | 0: no usability goals | 1: some usability goals noted but some were not clearly linked to appropriate quantitative measurements | 2: each usability goal is linked to a quantitative measurement (where appropriate) | |
Usability test procedure | The detailed usability testing plan should be complete and contain appropriate detail to enable an independent team to run the test and gather useful data. Specifically, it should address the following questions: | 0: test procedure does not address more than one of these questions | 1: test procedure addresses 2-5 of these questions | 2: test procedure addresses 6 or more of these questions | |
Reporting | The instructions provided to the examination team should include instructions for recording and reporting the information from the testing and evaluation procedure. | 0: inadequate explanation of how the examiners should record and report information to the design team | 1: offers a reasonable explanation for how the examiners should record and report information | 2: explains how the information should be recorded and reported, and indicates the expectations of the design team and potential implications of the testing outcomes | |
Usability Evaluation | A detailed usability evaluation plan should enable independent evaluators to assess the design with respect to the stated goals for the project. | 0: No description of usability evaluation methodology or criteria to be assessed. | 1: A minimal usability evaluation plan is presented, but it is unclear, or not well thought out. Evaluation criteria are stated but not adequately justified, or do not seem to be appropriate for evaluating this system. | 3: A usability evaluation plan is presented that is likely to produce useful data. Evaluation criteria are described and justified using usability principles. However, not all of the suggested evaluation criteria are appropriate for evaluating the system. | 5: Complete and clear description of the methodology to be used for the usability evaluation and the criteria to be assessed, e.g., referring to specific heuristics from Nielsen's list. |
The objective is to give you experience in evaluating another group's design, as implemented in their high-fidelity prototype. This will be done by carrying out the exercises they have specified in the evaluation plan of their project notebook. You will also document and analyze your evaluation results, commenting on whether the group has chosen reasonable exercises to assess the usability of their system for the given benchmark tasks. If you feel that the exercises do not adequately serve to evaluate the strengths and weaknesses of the project, you may carry out additional exercises, provided that you justify their importance. Note that this does not mean you can ignore the exercises proposed by the development team (whose project you are evaluating). Finally, based on your analysis of the evaluation results, suggest what changes should be considered for the prototype system and describe how these would improve the design.
Evaluation Team | Development Team |
---|---|
Distraction Shield | Mindful |
Mindful | Time Token |
Time Token | Timebank |
Timebank | CRS |
CRS | MNK |
MNK | GCL |
GCL | conscious.ly |
conscious.ly | Embedded Art |
Embedded Art | iMoody |
iMoody | BKC (Alek, Sabrina and Stefano) |
BKC (Alek, Sabrina and Stefano) | AAK (Lea, Abed and Mark) |
AAK (Lea, Abed and Mark) | DYM (Jeremy, Danning, and Bilal) |
DYM (Jeremy, Danning, and Bilal) | FSCO (Chuning and Xirui) |
FSCO (Chuning and Xirui) | RS (Pierre and Paul) |
RS (Pierre and Paul) | Distraction Shield |
To assist you in carrying out these exercises, the development team should designate one of its members to act as a liaison to your group. This is the individual you should contact for any assistance, as required, related to the setup or operation of the prototype for evaluation.
It is strongly preferred that the usability test be conducted on a participant who is a representative member of the users for whom the system is being developed. If this user population is not the general public, the liaison is responsible for putting you in touch with at least two such individuals. Please discuss with the instructor if you are having trouble identifying appropriate test subjects.
The results and analysis of the evaluation exercises you conduct, along with discussion of the relevance of each exercise to the intended tasks, should be included in your Web notebook.
The following rubric describes the assessment criteria by which your group's deliverable will be graded.
Activity | Details | Unsatisfactory | Bare minimum | Satisfactory | Above and beyond |
---|---|---|---|---|---|
Testing | Have all the documented tasks and test cases been completed with target users? This includes completion of all usability testing and evaluation plans and documentation of results of testing and evaluation. | 0: No tasks and test cases, or irrelevant ones are reported. | 4: A summary of user testing results was provided but a substantial proportion of tasks and test cases were left incomplete or were not completed with target users. | 12: The summary suggests that most tasks and test cases were completed with target users. | 20: All the documented tasks and test cases were completed with target users. |
Results and Analysis | What did you observe during the tests? What comments did users provide? Did any of the tests fail? If so, why? This includes discussion of system usability on benchmark tasks as assessed through quantitative measurements or heuristics. | 0: No user testing results summary was provided. | 8: Quantitative summaries of test completion results for measures and qualitative observations (as appropriate) are present for some benchmark tasks but these are lacking in detail. | 24: Quantitative summaries of test completion results for measures and qualitative observations (as appropriate) are present for all benchmark tasks. A list of usability issues is provided and the analysis is fully justified by the observations. | 40: Quantitative summaries of test completion results for measures and qualitative observations (as appropriate) are present for all benchmark tasks. User quotes and appropriate images are used to illustrate ideas. Similarities and differences across observed users are analyzed. A prioritized and clearly well-defined list of usability issues (rated high/medium/low) is provided and the analysis is fully justified by the observations. |
Test Plan Critique | Did the test cases address the usability goals of the system? What improvements were made to the test plan to better suit the design goals? This includes the rationale for any additional exercises you carried out, or modifications made to the originally proposed test plan, and the assessment of the quality of the test plan you were provided in terms of how well it fit with the usability goals of the design team. The test plan critique examines whether or not the test plan is complete and adequately addresses the design goals of the system. Recall the elements that should be included in a complete evaluation plan. There should be a direct correspondence between the stated design goals and the measures or indicators selected for examination in the evaluation plan. | 0: No critique of the usability evaluation methodology or criteria to be assessed is presented, or the methodology and criteria are merely summarized rather than critically evaluated. | 2: The evaluating team discusses the suitability of the evaluation criteria for evaluating this system but the justifications are unclear or not adequately grounded in usability principles. Modifications to the usability evaluation plan (if proposed) are unclear, or not well thought out. If present, obvious deficiencies in the originally proposed plan are not addressed with modifications. | 6: The evaluating team clearly states judgments of the suitability of the original usability evaluation criteria. These judgments are justified using usability principles. Modifications to the usability evaluation plan (if any) are likely to produce useful data and apply criteria that are appropriate for evaluating the system. If present, obvious deficiencies in the originally proposed plan are all addressed with modifications. | 10: The evaluating team presents a thorough, complete, and clear critique of the methodology used for the usability evaluation and the criteria to be assessed (e.g., referring to specific heuristics from Nielsen's list). Modifications to the usability evaluation plan (if any) are likely to produce useful data and apply criteria that are appropriate for evaluating the system. If present, both obvious and minor deficiencies in the originally proposed plan are addressed with modifications. |
Design Critique | What design improvements can you suggest to better suit the target audience's needs? This includes the formative feedback to the design team and suggestions or recommendations for improvement. | 0: No relevant suggestions or recommendations for improvement are written. | 6: Suggestions or recommendations for improvement are presented but they are not useful or do not clearly derive from the test results. | 18: The evaluating team suggested several possible changes derived from the user testing data, although not all of the changes were useful or some important issues that arose in testing were overlooked. | 30: The evaluating team suggested several possible changes based on the user testing, all of which were important and directly addressed the problems identified in user testing. |
Now that you have completed an initial prototype, (hopefully) carried out your own evaluation of the interface, and obtained the results of a formal evaluation conducted by another team, you are ready to implement another iteration of your design.
The following rubric describes the assessment criteria by which your group's deliverable will be graded.
Activity | Details | Unsatisfactory | Bare minimum | Satisfactory | Above and beyond |
---|---|---|---|---|---|
Design Evolution | The prototype is likely to have undergone revision in light of the feedback you received, both from test subjects and from the formative feedback deliverable. You will need to provide the rationale for any significant changes that you made to the design in response to that feedback. | 0: No discussion of changes made to the design. | 1: Only a few changes to the design have been explored or carried out. However, their connection to the feedback received was not apparent. | 3: Three or more changes to the design have been explored or carried out. Most of the changes were connected to the feedback received and were expressed in terms of the factors of usability or feasibility. | 5: Three or more insightful and specific changes were made to the design based on feedback as well as new insights from the designers. For every change made to the design from the point of the computer prototype, these decisions were justified from the perspective of usability or feasibility. |
Prototype Revisions | Your alpha prototype should be largely functional with respect to your design objectives. While this prototype should be based on your description from earlier parts of the project, it is understood that the design will include modifications in light of formative feedback and possibly further user testing. | 0: No alpha prototype or an irrelevant prototype is provided. | 1: The alpha prototype has not developed further capability beyond that of the previous computer prototype, or some of the important interactions are broken. | 3: The alpha prototype is mostly complete, although some functions are not yet interactive. Some important functionality remains difficult to use. | 5: The prototype successfully implements the relevant functions necessary to allow the user to carry out all important interactions. The functionality is clearly related to the objectives from the project proposal, reproduced in this deliverable for convenience of the assessors. |
Refinement of user manual or installation guide | An updated user manual or installation guide, as appropriate for your project, should be provided to enable the user to use your system independently. | 0: No user manual or installation guide provided. | 1: The user manual or installation guide is provided but does not show meaningful changes from the last version. | 3: The user manual or installation guide is easily readable and clearly describes the changes that were made as a result of formative feedback (and possibly further testing). | |
Based on your own testing, and further peer feedback, you will almost certainly have a number of refinements that appear important to make. Keep in mind that at this stage, your deliverable will be put to the test of the course objectives, that is, realizing an interactive technology that "enhances us as humans". In this respect, remember that you're not building a ready-for-market product, so your effort is intended to be focused on usability rather than functionality.
The following rubric describes the assessment criteria by which your group's deliverable will be graded, similar to that for the alpha prototype.
Activity | Details | Unsatisfactory | Bare minimum | Satisfactory | Above and beyond |
---|---|---|---|---|---|
Design Evolution | The prototype is likely to have undergone revision in light of testing and evaluation of your alpha prototype. You should provide the results of your testing and describe the rationale for any significant changes that you made to the design in response to these results. | 0: No discussion of changes made to the design. | 1: Only a few changes to the design have been explored or carried out. However, their connection to the testing and evaluation was not apparent. | 3: Three or more changes to the design have been explored or carried out. Most of the changes were connected to the testing and evaluation conducted and were expressed in terms of the factors of usability or feasibility. | 5: Three or more insightful and specific changes were made to the design based on testing and evaluation as well as new insights from the designers. For every change made to the design from the point of the alpha prototype, these decisions were justified from the perspective of usability or feasibility. |
Prototype Revisions | While your beta system is not expected to be "ready-for-market", it should nevertheless be relatively complete with respect to functionality and should meet your design objectives. This will almost certainly require modifications to the alpha prototype in light of further testing and evaluation. | 0: No beta prototype or an irrelevant prototype is provided. | 1: The beta prototype has not developed further capability or design evolution beyond that of the alpha prototype, or some of the important interactions are broken. | 3: The beta prototype is mostly complete, but with respect to working functionality, does not clearly satisfy the course objective of overcoming human limitations. | 5: The prototype successfully implements the relevant functions necessary to allow the user to carry out all important interactions. The functionality is clearly related to the course objectives and succeeds in "enhancing (the users) as humans". |
Refinement of user manual or installation guide | An updated user manual or installation guide, as appropriate for your project, should be provided to enable the user to use your system independently. | 0: No user manual or installation guide provided. | 1: The user manual or installation guide is provided but does not show meaningful changes from the last version. | 3: The user manual or installation guide is complete, easily readable, and clearly describes the changes that were made in this final development cycle in response to further testing and evaluation. | |
The in-class presentations will run for 10 minutes per group. These provide an opportunity for you to showcase
the results of your work. The presentation should begin with a brief
(maximum five minutes) overview, explaining the rationale behind the
system you chose to implement. You should also provide a summary of
your experiences testing and refining the system through its various
stages of development, commenting on how the system evolved as a
result of the evaluation feedback. The remaining time will be
allocated for other class members to try out the system you have
implemented and (simultaneously) a question and answer period.
Plan for demo syndrome: Make sure you have multiple "fall-back" plans available should any problems arise with your "live demo".
The following rubric describes the assessment criteria by which your group's presentation will be graded.
Activity | Details | Unsatisfactory | Bare minimum | Satisfactory | Above and beyond |
---|---|---|---|---|---|
Technical presentation | How effectively did the team convey their project achievements through oral presentation, supported by media such as slides and video(s)? | 0: The presentation was difficult to follow and did not make effective use of supporting media. | 1: The presentation was reasonably understandable and made effective use of supporting media (e.g., the slides were not simply a set of speaking notes). | 2: The presentation was well organized, the group members were clear in their delivery, and successfully used supporting media to complement their delivery of the project achievements. | |
Response to questions | Q and A is the opportunity for the audience to dig into specifics of interest, going beyond the canned content of a rehearsed presentation. | 0: Answers did not clearly answer the questions. | 1: Answers to questions were generally clear and direct. | ||
Fulfillment of course project theme | The project theme, "easing the communication burden", involves the use of human-computer interaction to help overcome human limitations. | 0: The presentation did not convincingly demonstrate that the project fulfilled the course project theme. | 2: The presentation adequately demonstrated that the project serves to ease communications tasks for a given audience segment. | ||
Effectiveness of user interface | The end result of the project's efforts will be most visible in the demonstration phase of the presentation, which may consist of pre-prepared videos and/or a live demonstration. This is the opportunity for the project team to convince the audience that their interface evolved as would be expected following the principles of a user centered design approach. | 0: No convincing demonstration of a working user interface was provided. | 1: The implemented user interface does not successfully meet the core design objectives. | 3: The user interface evolved from a design concept into what appears to be a successful implementation that would be expected to meet the needs of the user community. | 5: The user interface evolved into what appears to be a successful implementation, and overcame one or more non-trivial and non-obvious design challenges to do so. |
Last updated on 27 September 2023