Christopher Brooks
Advanced Research in Intelligent Education Systems (ARIES)
Department of Computer Science, University of Saskatchewan
Saskatoon, SK, Canada
cab938@mail.usask.ca

Lori Kettel
Department of Computer Science, University of Saskatchewan
Saskatoon, SK, Canada
kettel@cs.usask.ca

Collene Hansen
Advanced Research in Intelligent Education Systems (ARIES)
Department of Computer Science, University of Saskatchewan
Saskatoon, SK, Canada
cfh928@mail.usask.ca
Abstract: While the educational technology community is rife with learning object-based specifications, tools for supporting these specifications (at both the authoring and the delivery level) are patchy at best. This paper outlines our experiences over the last two years in building a suite of tools to deploy an online course based primarily on IMS specifications. These tools include a learning object sequencing environment, a content delivery engine, and an assessment application. Finally, this paper outlines areas where specification and tool support could be improved, based on research conducted by members of our lab over the last decade.
Introduction
Learning objects have been heralded as the next big thing in educational technology. Defined broadly by the IEEE as "any entity, digital or non-digital, that may be used for learning, education or training" [6], learning objects are meant to be easy to find, obtain, and integrate into online courses. To this end, researchers, governments, and industry consortiums have created a plethora of standards and specifications outlining how learning objects can be described, packaged, aggregated, and shared. These specifications attempt to bring order to the chaos that tends to surround educational technology, and to promote interoperability between the courseware offerings of various vendors. Despite these noble goals, support for the major specifications is still very much in its infancy, largely due to the overhead tool vendors must undertake when integrating such specifications into their products.

Developing a course to be taught at a distance over the Internet is a difficult process. It requires not only that content be carefully crafted into accessible and informative web pages, but also that the tools to support self-directed learning, the learning community, and assessment be created and used effectively. This amounts to a great deal of time and money being required to create new course offerings. In an attempt to mitigate the technical challenges involved, many universities have entered into agreements with third-party learning content management system vendors such as WebCT and Blackboard.

In the summer of 2002 the Department of Computer Science at the University of Saskatchewan began the development of our first distance education course. This course, designed for students who are not majoring in computer science, is a breadth course that situates computer science historically and focuses on describing entry-level concepts using web technologies.
A brief survey of available content management systems was conducted, and a decision was made to develop both the content and the tools to support its delivery in-house. The course has since been used by 122 distance education students as their primary learning resource, and has been available as a supplement to more than 1500 students who received traditional face-to-face instruction. The goal of this paper is to discuss the issues raised in creating such a content management system. It is a product of joint work by the laboratory for Advanced Research in Intelligent Educational Systems (ARIES) and the instructional and technical support groups responsible for providing technology enhanced learning within the
Department of Computer Science at the University of Saskatchewan. The paper is organized as follows: firstly, a brief overview of the major e-learning specifications used in our development is given. Secondly, the design and implementation of the iHelp Learning Content Management System (iHelp LCMS) is presented, focusing on both the environment itself and the process by which an author can construct and deploy a course. This is followed by a general discussion of the open issues that remain in e-learning. The work concludes with a brief description of our future development goals.
E-learning Specifications
The IMS Content Packaging Specification [7] describes how digital resources can be organized into logical learning units called content packages. These packages are made up of four major parts: a manifest file which outlines metadata for the package as a whole (usually using the IEEE LOM [6], though strictly speaking the metadata can be expressed using other vocabularies as well); a set of organization hierarchies describing how smaller units of learning (e.g. lessons, activities) should be arranged; a collection of references to digital assets (e.g. particular files in the package), which in turn can include descriptive metadata; and finally sub-manifests, which effectively allow for the nesting of content packages within one another.

Organizations within a content package are particularly interesting, as they break larger learning objects down into smaller pieces. Different hierarchies then dictate different paths a learner can take through the content, given by an in-order traversal of the organization tree. Instructors can mark these paths as visible if they are to be rendered in a content management system, or invisible if they should not be shown to the learner. Content packages are typically created by instructional designers (using a tool such as RELOAD, http://www.reload.ac.uk/), then imported into a content management system (such as WebCT or Blackboard) for deployment. While content packages may further be stored in learning object repositories to be shared with other educators, this is not yet common practice.

While effective at aggregating content, the content packaging specification does not provide fine-grained control over which users should see which paths through the content, and provides no mechanism for dynamically altering paths. Instead, the content packaging specification has been augmented by a number of further specifications, in particular the IMS Simple Sequencing specification [3] and the IMS Learning Design specification [5].
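As a concrete sketch of the four-part package structure described above, a minimal imsmanifest.xml might look like the following (identifiers, titles, and file names are hypothetical; sub-manifests and most metadata are omitted for brevity):

```xml
<manifest identifier="MANIFEST-1"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <metadata>
    <!-- IEEE LOM metadata describing the package as a whole -->
  </metadata>
  <organizations default="ORG-1">
    <!-- One of possibly several hierarchies through the content -->
    <organization identifier="ORG-1">
      <title>Evolution of Computers</title>
      <item identifier="ITEM-1" identifierref="RES-1" isvisible="true">
        <title>Inventions</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- References to the digital assets contained in the package -->
    <resource identifier="RES-1" type="webcontent" href="inventions.html">
      <file href="inventions.html"/>
    </resource>
  </resources>
</manifest>
```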
These specifications are considerably more complex, and allow an author to associate fine-grained rules with smaller content pieces to control the flow of content delivery. The intent of the simple sequencing specification is to provide a base level of adaptation of course material to a particular learner. This is achieved by allowing authors to indicate, within a content package organization, which items should be displayed to the learner and under which conditions. Conditions available within the specification include assessment conditions (e.g. those based on a given score on an exam), progress conditions (e.g. those based on the content viewed, the length of time spent on content, or the objectives the learner has met), and general conditions (e.g. whether the content should always be displayed for this learner). The indication of which conditions have been met is supported by associating action rules (referred to as "rollup rules") with content items, and linking these to a tracking model. This tracking model is similar to the general purpose learner model described in the user modeling literature, but is restricted to the vocabulary provided in the simple sequencing specification.

Absent from the simple sequencing specification is a notion of the different roles that users can play, and of how different users can interact with one another to achieve certain learning objectives. This, then, is the responsibility of the learning design specification, which indicates how learners can collaborate with one another, as well as with other supporting actors (e.g. staff), in and around learning activities. Activities are represented using the organizations in a content package, and can be mixed with simple sequencing information to attain even finer-grained control over the user experience. While electronic instructional planning has been heavily researched by the e-learning research community, there are few commercial products that provide learner-based content sequencing.
This research is both broad and deep, and can be found in the areas of user modelling, instructional planning, artificial intelligence in education, and adaptive hypermedia, to name a few. Even with industry-developed specifications, however, little or no adoption beyond the trivial has occurred in commercial environments (both authoring environments and content management systems). This situation is frustrating: building home-grown tools to provide an adaptive experience for a learner is an expensive and error-prone process, especially when many universities have already entered into agreements to provide learning content management systems campus-wide. Deploying static learning content is even more frustrating, however, as over twenty years of e-learning research is now ready and backed by well-defined specifications. The next section will outline our experiences in building a basic learning content management system with the express goal of attaining simple learner-based sequencing while remaining compliant with current e-learning specifications.
(The iHelp system is available at http://ihelp.usask.ca.)
Each rule is made up of three parts: the first indicates the role of user the rule applies to, the second the condition to be checked for availability, and the third the condition to be checked to determine if the learning object has been completed.

Figure 1 provides a graphical representation of a marked-up content package. In this example each box represents a learning object, with indented learning objects below indicating a hierarchical relationship. Availability and evaluation rules are shown for each learning object for one role only, that of a typical entry-level student.

In this example, the instructional designer has decided that the root object, "Evolution of Computers", should be shown to the learner only once the date of May 23rd has passed. After this date, the object will be rendered, and the child objects will have their availability rules evaluated to determine if they should be shown. The designer has decided to make students traverse the course in a highly structured manner: the "Inventions" learning object will only be shown once the student has viewed "Evolution of Computers", the "Abacus" learning object will only be shown once the student has viewed the "Inventions" learning object, and so on. Interestingly, the designer has chosen to indicate that the "Inventions" learning object is only completed once all child objects have been completed and the student has scored a mark of greater than 80% on the first test. A different approach has been taken with the "First Computers" learning object, where the designer has opted to consider the object completed as long as both the "Atanassoff" and "Test 2" objects are completed. While the designer could have restricted the availability further, the rules for the "Test 2" object (perhaps set by another instructional designer) are already fairly strict. This demonstrates the ability to encapsulate and partition the complex rules of a given learning object, while retaining the flexibility to override or add to these rules in determining availability.
Evolution of Computers
    role = cmpt100student
    availability = time_gt(23/05/2005)
    evaluation = completed(Inventions) && completed(First Computers)
    Inventions
        role = cmpt100student
        availability = viewed(Evolution of Computers)
        evaluation = completed(Abacus) && completed(Loom) && completed(Difference) && scored_gt(80.0,Test 1)
        Abacus
        Loom
        Difference
        Test 1
    First Computers
        role = cmpt100student
        availability = completed(Inventions) && time_gt(30/05/2005)
        evaluation = completed(Atanassoff) && completed(Test 2)
        Atanassoff
        Test 2
Figure 1: Annotated Content Package

Once created, rules are converted into the Java programming language and serialized into classes as "if-then" conditional statements. These classes are then compiled and stored in a database in the delivery engine.
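As a hedged illustration, a generated class for the availability rule on "First Computers" from Figure 1 might look along these lines (the class and method names, the tracking-model representation, and the use of modern Java date types are our assumptions, not the actual iHelp code):

```java
import java.time.LocalDate;
import java.util.Set;

// Hypothetical sketch of a generated availability rule class; the real
// iHelp classes are produced automatically from the authored rules.
public class FirstComputersAvailability {

    // 'completed' holds the identifiers of learning objects the learner
    // has finished; 'now' is the date at evaluation time.
    public static boolean evaluate(Set<String> completed, LocalDate now) {
        // completed(Inventions) && time_gt(30/05/2005)
        if (completed.contains("Inventions")
                && now.isAfter(LocalDate.of(2005, 5, 30))) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> completed = Set.of("Inventions");
        System.out.println(evaluate(completed, LocalDate.of(2005, 6, 1)));  // true
        System.out.println(evaluate(completed, LocalDate.of(2005, 5, 1)));  // false
    }
}
```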
When a given user logs into the iHelp LCMS (shown in Figure 2), they are given a three-frame view in which an outline of the course content is displayed on the left, navigation and tool options are displayed at the top, and learning object content is shown on the right. For each role the user is a member of, the availability rules associated with the root activity in that course are evaluated. This evaluation is recursive, continuing down the activity tree as described in Figure 1. Evaluation down the tree stops either when the availability rule for a node fails (and thus any children of that node will by default not be made visible), or when the user has the navigation view collapsed (and thus has chosen not to see the children of a given activity). The result of this process is a set of trees indicating which nodes are available for each role the user is in. These trees are merged together (using a Boolean OR operation) into a single tree, which is rendered to the user.
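The recursive walk and the per-role merge can be sketched as follows (a minimal illustration under our own assumptions about types and rule representation, not the actual iHelp implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Minimal sketch of the recursive availability evaluation: descend the
// activity tree, stopping when a node's availability rule fails, then
// OR-merge the per-role result trees into the tree actually rendered.
public class AvailabilityTree {

    static class Node {
        final String id;
        final List<Node> children = new ArrayList<>();
        boolean visible;               // evaluation result for one role
        Node(String id) { this.id = id; }
    }

    // Evaluate availability for a single role; children of an
    // unavailable node default to invisible and are never evaluated.
    static void evaluate(Node node, Predicate<String> availableFor) {
        node.visible = availableFor.test(node.id);
        if (!node.visible) return;
        for (Node child : node.children) evaluate(child, availableFor);
    }

    // Boolean OR merge over structurally identical trees: a node is
    // rendered if it is available in at least one of the user's roles.
    static void merge(Node into, Node from) {
        into.visible = into.visible || from.visible;
        for (int i = 0; i < into.children.size(); i++)
            merge(into.children.get(i), from.children.get(i));
    }
}
```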
Figure 2: A rendered view of the iHelp Learning Content Management System

By storing the rules as compiled Java statements, they can be loaded and evaluated very quickly using the Java reflection API. Commonly, students need to know why they cannot access a learning object (e.g. which rule is failing). This tree-based evaluation approach allows us to determine very quickly which rules have failed, and can be combined with plain English explanations to instruct learners on what they need to accomplish to continue.

It should be noted that our adoption of the IMS Simple Sequencing specification is only at the abstract data model level, and not at the level of the XML binding that the IMS provides. There are two reasons for this: firstly, when sequencing information is integrated directly within content packages, some third-party tools that are not aware of how it should be handled render the contents of those packages inappropriately. Secondly, the implementation of a component to read and write the IMS Content Packaging format with our extended simple sequencing data is a non-trivial process, and offers no benefit until third-party applications begin to support this format.
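Turning a failed condition into plain-English feedback can be sketched as a simple template lookup (the condition names follow Figure 1, but the message templates and class name here are hypothetical, not the actual iHelp wording):

```java
// Sketch of mapping a failed availability condition to a plain-English
// explanation for the learner, keyed on the rule vocabulary of Figure 1.
public class RuleExplainer {

    static String explain(String condition) {
        if (condition.startsWith("completed(")) {
            String obj = condition.substring("completed(".length(), condition.length() - 1);
            return "You must first complete \"" + obj + "\".";
        }
        if (condition.startsWith("viewed(")) {
            String obj = condition.substring("viewed(".length(), condition.length() - 1);
            return "You must first view \"" + obj + "\".";
        }
        if (condition.startsWith("time_gt(")) {
            String date = condition.substring("time_gt(".length(), condition.length() - 1);
            return "This material becomes available after " + date + ".";
        }
        return "Condition not met: " + condition;
    }
}
```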
Question: Label the following from this list of options: statement, function, built-in function, parameter, argument, valued function, procedure function, operator, operand, concatenation.
A: _______________ B: _______________

Figure 3: An example question from an online course

If the student enters "function" for the value of A, then they demonstrate the ability to remember factual knowledge about the topic "Functions". If they instead enter the value "built-in function", then they demonstrate both the ability to remember factual knowledge and the ability to classify conceptual knowledge about the same topic.
Quizzes are given both before and after a learner views a learning object. The difference between the knowledge demonstrated after reading the object and the knowledge demonstrated before reading it is then associated directly with the learning object. For example, if a student tries to view a learning object that discusses the topic of "JavaScript Functions", they are first asked a number of questions about the different kinds of functions that exist. A detailed learner model is then built that indicates what knowledge the student already has (for instance, they may have surface knowledge of "Built-in Functions" and deep knowledge of the "Alert" function in particular). After answering the pre-test, the learner is able to read the learning object, which ends with another quiz. As expected, learners generally perform better on the quizzes they take after reading a learning object. This net gain of knowledge can then be formally associated with the learning object (e.g. by modifying the IEEE LOM data associated with it). With the mapping of content to concepts complete, instructional planning software can dynamically determine which pieces of content are acceptable for delivery to learners, and choose from among these based on other attributes of the content (e.g. learning styles).
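The pre/post comparison can be sketched as a per-concept score difference (the class name, concept labels, and 0.0-1.0 score scale are illustrative assumptions; the actual learner model is richer than a flat score map):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of computing the knowledge gain attributable to a
// learning object: for each concept quizzed, the difference between the
// post-quiz score and the pre-quiz score (0.0 if the concept was not
// covered in the pre-test).
public class KnowledgeGain {

    static Map<String, Double> gain(Map<String, Double> pre, Map<String, Double> post) {
        Map<String, Double> result = new HashMap<>();
        for (Map.Entry<String, Double> e : post.entrySet()) {
            double before = pre.getOrDefault(e.getKey(), 0.0);
            result.put(e.getKey(), e.getValue() - before);
        }
        return result;
    }
}
```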
Acknowledgements
This project involved a great number of people from both the Department of Computer Science Instructional Support group as well as the ARIES research laboratory. Special thanks go to Jim Greer, Gina Koehn, Jayakumar Srinivasan, Sonia Chiasson, Monisha Shukla, Carrie Demmans, Guus van de Velde, and Gayle Eyolfson who helped in the administration, design, and implementation of this project. This project was funded in part by the University of Saskatchewan and the Government of Saskatchewan through the Technology Enhanced Learning program, as well as the Government of Canada through the LORNET NSERC research network.
References
[1] B.S. Bloom, ed., Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook I, Cognitive Domain, David McKay Company Inc., 1956.
[2] L. Anderson and D. Krathwohl, eds., A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Addison Wesley Longman Inc., 2001.
[3] IMS Simple Sequencing Information and Behavior Model, Version 1.2, IMS Global Learning Consortium Inc., 2003.
[4] C. Brooks and G. McCalla, "Flexible Learning Object Metadata", International Workshop on Applications of Semantic Web Technologies for E-Learning (SW-EL '05), 2005. In submission.
[5] IMS Learning Design Information Model, IMS Global Learning Consortium Inc., 2003.
[6] IEEE P1484.12.1-2002, Draft Standard for Learning Object Metadata, IEEE Inc., 2002.
[7] IMS Content Packaging Information Model, Version 1.1.4, IMS Global Learning Consortium Inc., 2004.
[8] IMS Learner Information Package Information Model Specification, Version 1.0, IMS Global Learning Consortium Inc., 2001.
[9] D. Vogiatzis et al., "The Learner's Mirror", Ninth European Conference on Pattern Languages of Programs, 2004.
[10] M. Winter, C. Brooks, and J. Greer, "Towards Best Practices for Semantic Web Student Modelling", 12th International Conference on Artificial Intelligence in Education (AIED 2005), IOS Press, 2005.