When I was a final-year undergraduate student, the course load in one of the semesters consisted of eight regular (3-credit equivalent) courses plus one obligatory lecture series on professional engineering practice. This was typical for students enrolled in the computer engineering program at the time. What made it a real challenge, however, was one course in Distributed Systems, for which the assignments seemed to take more time than the homework in all my other courses combined. As students, we complained constantly, going so far as to petition the instructor to reduce the workload.
Years later, I realized that I'd learned (and remembered) more useful knowledge from that one course than pretty much any other from my undergraduate studies. The skills I developed from those homework assignments proved instrumental in the work I did in my early graduate studies and ultimately launched a good part of my research program at McGill. Had our class petition succeeded in swaying the instructor, I might have had some semblance of a life that semester, but looking back now, I'm grateful we didn't. I'm also aware that those "painful" homework assignments were in fact quite straightforward, but we needed to gain some experience working with the underlying functions to understand how they worked together.
This formative experience heavily influenced my teaching philosophy, leading me to focus on assignments that require students to implement and test computer-based solutions to challenging problems. I believe there is no substitute for the pedagogical benefits and experience that arise from applying taught concepts in such hands-on problem-solving activities. This is particularly important in training students who will go on to become practitioners in disciplines on which the well-being of our society depends.
Challenging assignments, however beneficial to the learning outcomes, will not achieve their objective if students are not motivated to invest the necessary effort. For this reason, I began my teaching at McGill in 1998 by building the assignments of my Artificial Intelligence (AI) course around the simulator league of the RoboCup soccer competition, having the class apply the techniques they were studying to achieve improved game-playing competency. At the end of the term, I organized public demonstrations in which the students' soccer-playing agents competed against each other in a soccer tournament, held in the foyer of the McConnell Engineering building. Despite numerous problems with the soccer simulator, and what turned out to be unreasonably heavy demands for infrastructure development, class enthusiasm was very high. Two of the student groups from my class went on to publish their final papers, one at the Annual Conference of the American Association for Artificial Intelligence (AAAI) and another at the International Conference on Cellular Automata for Research and Industry.
In terms of in-class activity, I have increasingly moved away from canned PowerPoint lectures in favour of Socratic teaching, and more recently, toward a flipped classroom model, in which students are expected to study the course readings as homework, and use class time for discussion, practice problems, and exercises.
In my Human-Computer Interaction (HCI) class, I seek to blend theory with meaningful examples of professional practice. To do so, I organize the course activities around the life cycle of design, development, testing, and refinement of an interactive computer system, and bring in frequent guest lecturers to speak on their personal experiences working on problems related to each week's material.
Over the years, the singular focus on RoboCup in my Artificial Intelligence class gradually gave way to less infrastructure-intensive assignments, so as to help students focus their efforts on activities more directly relevant to course content. I retained in-class competitions, at least for the first assignment on two-player adversarial games, and more recently expanded this to a second assignment involving reinforcement learning applied to arcade games. Although risky, since there was some uncertainty as to whether the students would be capable of solving the problem, the effort was enormously rewarding, both for me and for the class: most of the student teams succeeded in developing AI agents that proved competent at playing Ms. Pac-Man.
I make no apologies for having high expectations of my students, or for being unwilling to play the game of grade inflation. Similarly, I have little patience for the few students who convey an attitude of entitlement to a good grade simply for showing up to class, rather than working to earn it. In fairness, this attitude seems to originate much earlier in the education system, although Ginsberg offers a damning indictment of the post-secondary system in particular and the associated commoditization of higher education; I have encountered this at my own university, with disturbing results.
For many years, I held the conviction that, in tandem with formal assignments, final examinations served as an effective mechanism for summative assessment of learning and provided an explicit reward structure to encourage student learning. I was forced to question this belief, however, after uniformly poor marks on the final examination of my Artificial Intelligence course in the winter 2015 semester, an examination similar in difficulty to those of previous years.
Shortly after this experience, I attended a talk by Harvard's Eric Mazur, organized by the Faculty of Engineering's Teaching Enhancement Initiative. Prof. Mazur described several transformative pedagogical techniques that captivated my interest. Despite my initial skepticism, I was soon convinced of the merits of the approach to student learning he advocated, which departs from the traditional model of summative evaluation through formal individual examinations, a model that tends to encourage last-minute cramming and results in poor knowledge retention. At the same time, drastic cuts to our budget for teaching assistants made it infeasible to continue grading student assignments and offering meaningful feedback. The solutions inspired by Prof. Mazur's invited talk could not have been more timely.
These factors motivated my effort, as the faculty's inaugural Gerald W. Farnell Teaching Scholar, to adopt an assessment strategy in which the final examination was replaced with frequent computer-based in-class formal learning assessments, with opportunities for peer-based learning as students work in groups for a portion of their grade. Assignments are now peer- and self-assessed, a shift that also remedied another important problem: in previous years, I was dismayed by the increasing number of students whose only in-class communications consisted of challenging their marks on assignments, and those whose only visit to my office came after final examination marks were posted. By adopting a peer-based assessment strategy, I have seen a shift in the students' perception of the role of the professor, from that of an adversary in their negotiation for higher grades to an educator who is there to facilitate their acquisition of knowledge and skills. The pressures I previously experienced to engage in grade inflation, or to turn a blind eye to plagiarism, which I steadfastly refused, have since (largely) dissipated.
The switch from TA- or instructor-grading to peer- and self-assessment offers students a further benefit in the acquisition of the valuable soft skills of critique and self-reflection: students earn marks based not only on the assessment given by their peers, but also on the quality of the assessments themselves, the latter of which accounts for 20% of their grade. Part of the assessment quality mark is calculated automatically by the software platform we use, based on consistency with a theoretical "idealized assessment", and the remainder is determined by a teaching assistant who assigns marks according to the constructive value of the written feedback offered by each peer assessor.
I welcome and value student feedback regarding my teaching, especially when constructive suggestions are offered for improving the course. In this respect, students with criticisms often supply valuable ideas, even if that was not their primary intention. As such, I take course evaluations seriously. However, this does not mean that I will agree with every suggestion. Almost every batch of feedback contains mutually exclusive or contradictory suggestions: some students ask for more guest lectures, others for fewer; some praise the avoidance of a particular learning tool ("Thank you for not using WebCT - that program sucks") while others prefer that "common platform" despite its drawbacks. And, as expected, every batch of feedback includes some complaints about the workload or the grading, or suggestions that the examination be reduced in weight.