ELF: The Electronic Learning Facilitator

Recent years have seen an unprecedented increase in the use of the Internet and other global networks of computers. We have seen estimates of between 100,000 and 1,000,000 new users joining the Internet each month. Predominantly these are information seekers, but the figures also represent a significant increase in the number of information providers. For learners, we now have a significant set of information sources, albeit obscured by a lot of noise.

DOI: 10.1080/0968776950030111


Introduction
Recent years have seen an unprecedented increase in the use of the Internet and other global networks of computers. We have seen estimates of between 100,000 and 1,000,000 new users joining the Internet each month. Predominantly these are information seekers, but the figures also represent a significant increase in the number of information providers. For learners, we now have a significant set of information sources, albeit obscured by a lot of noise.
There are many different types of information sources now available, all with varying attributes. For many years, the Internet has provided email. For learners, this has allowed direct contact with the authors of research papers at different (and sometimes the same!) institutions. The Usenet News system has enabled learners to reach a far wider audience, with less intrusion (since it is up to each user to decide to read the news, whereas email is generally always read). Again, this provides a good resource for the learner to find out more by asking questions and receiving a wide variety of responses, although often this requires wading through a variety of 'me-too' responses, and flames (inflammatory responses which tend to solicit ad hominem attacks). Electronic mailing lists are the email equivalent of newsgroups; they are harder to find out about, but tend to carry less noise. FTP sites are another, now familiar, information source. Here electronic copies of research papers can be stored at an FTP (File Transfer Protocol) site, and others can then download (copy) them to their local machine for printing and reading. This is a rich source of information in the sense that there is rarely any noise stored at an FTP site, but it is hard to know where to look and what to look for, except from some other information source (such as a News article).
Recently, new types of information sources, such as the World Wide Web (WWW), Gopher, and the Wide Area Information Servers (WAIS) databases have proliferated (Berners-Lee et al., 1992). Gopher can be viewed as a distributed menu system, where users select items on a menu which may lead to another menu, or to some actual information (such as a picture or a research paper). The WWW is a distributed hypertext system where links to objects are embedded within other objects (where the objects can be text, pictures, video, and other multimedia). Both Gopher and the WWW support the browsing model: the user must actively follow links to find information, in contrast to email or News where the user passively receives information. WAIS databases are essentially free-text search engines which allow for keyword searching of collections of documents.
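As a rough illustration of the WAIS-style keyword search model, the sketch below builds an inverted index over a toy document collection. The document names and text are invented for illustration; WAIS itself used more sophisticated free-text ranking.

```python
# A minimal sketch of WAIS-style free-text keyword search: build an
# inverted index mapping each word to the set of documents containing
# it, then look terms up in the index. Documents here are invented.
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document names containing it."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

docs = {
    "paper1": "hypertext systems on wide area networks",
    "paper2": "keyword searching of document collections",
}
index = build_index(docs)
print(sorted(index["keyword"]))  # documents matching the term "keyword"
```

A real WAIS server would also rank matches by relevance rather than return an unordered set.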
The increase in the desire to use these networks, and the increase in the amount of information being made available over them, is causing problems for both experienced and novice users. This paper will introduce the problems, outline a possible solution, and end with an indication of current progress.

Problems
Information sources such as mailing lists and Usenet tend to overwhelm users with the sheer volume of information, often with a low ratio of signal to noise. Both can be thought of as unsolicited information, where the user must invest large amounts of time in reading to find the nuggets of useful information. By contrast, users of the WWW often have a different problem, that of navigation, by which we mean knowing where the desired information is to be found. With WAIS databases, the problem is similar: knowing which database to search. There is a directory-of-servers (a database consisting of short descriptions of a large set of registered databases) to help with this, but as with any centralized solution, it can be tied up by the multitude of requests from users around the world. It is also neither complete nor up to date.
Our current work is predicated on the belief that there is useful information in all these information sources, but the trick is to know where to look. For example, one often finds out about new WWW sites from the WWW Newsgroups. Another major source of pointers to useful sites is colleagues. There are also index sites which build a database of pointers to other sites, using some keyword mechanism to act as the key into the database. These include the WAIS directory-of-servers, and there are several others on the WWW (such as the Jumpstation, and the WWW Worm).
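The index-site idea above can be sketched as a small keyword-to-pointer database. The keywords and URLs below are invented for illustration; real index sites such as the Jumpstation built their databases by harvesting other sites.

```python
# A toy model of an index site: a database of pointers to other sites,
# keyed by keyword, as with the WAIS directory-of-servers. All entries
# are invented examples.
site_index = {
    "physics": ["http://example.ac.uk/physics/"],
    "hypertext": ["http://example.org/www-papers/"],
}

def lookup(keywords):
    """Return the union of pointers registered under any of the keywords."""
    hits = []
    for kw in keywords:
        hits.extend(site_index.get(kw, []))
    return hits

print(lookup(["hypertext"]))  # pointers filed under "hypertext"
```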

A solution - ELF
ELF is an attempt to address these problems. It is essentially a collection of agents which co-operate to aid users in finding information of relevance to their interests or tasks, in a timely and efficient manner. One of the main aims of this project is for ELF to work without explicitly requiring direct control by the user. To do this, ELF monitors the user at work to determine (from files recently edited, for example) the areas of current interest. ELF attempts to categorize the current work by a few keywords which are then used to seed a search conducted by various agents. The results of this search are presented to the user at some appropriate point (to avoid unnecessary interruptions in the user's current task). ELF can also be controlled directly by the user. This means that the user controls what information sources are searched, and decides on the search query terms.
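A minimal sketch of the monitoring step described above, assuming a simple scheme: scan a directory for recently modified files and take the most frequent non-trivial words as seed keywords. The recency threshold, stop-word list, and frequency heuristic are our invented stand-ins; the paper does not specify ELF's actual categorization method.

```python
# Sketch of a user-monitoring agent: find files modified within the
# last day and pick frequent words as candidate search keywords.
# Thresholds and the stop-word list are illustrative assumptions.
import os
import re
import time
from collections import Counter

STOP_WORDS = {"the", "and", "of", "to", "a", "in", "is", "for", "with"}

def seed_keywords(directory, max_age_secs=86400, top_n=5):
    """Return up to top_n frequent words from recently changed files."""
    counts = Counter()
    now = time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) < max_age_secs:
            with open(path, errors="ignore") as f:
                words = re.findall(r"[a-z]{3,}", f.read().lower())
            counts.update(w for w in words if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(top_n)]
```

In ELF these keywords would then seed the queries submitted by the search agents.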
There are two major sides to ELF. The first handles the user, and attempts to gain knowledge about his or her tasks. The second is a back-end which consists of agents that 'know' how to query and search the various information sources. We will discuss each of these parts of ELF in more detail below.
ELF allows for varying interaction with the user. The minimal level of interaction involves a discreet icon on the screen from which various control panels can be accessed. The only other ELF tool that needs to be seen is called PrISM (Personal Information Space Manager). This is a hierarchical structuring tool which can be used as a simple file/directory browser, but also allows for grouping together related tasks. PrISM gives access to the keywords that ELF calculates for a directory or file, and allows the user to edit these. Users can also use PrISM to determine and control which agents are currently working on their behalf. While it is true that the hierarchical structuring model leaves a lot to be desired, it is easy to understand, and maps naturally onto the common computer file system.
ELF provides several different mechanisms for notifying search results. Where a search finishes while the user is logged on, ELF can notify by forcing Mosaic (or certain other WWW browsers) to pop up on the screen with a page containing pointers to the results found; since Mosaic can access most types of information source, this provides a simple click-to-access interface to the results. Another option is to indicate within PrISM the presence of results at the relevant part of the hierarchical structure. A third option is to provide an xbiff-type program (xbiff is a small icon that signals the arrival of new mail by changing colour) to flag the arrival of results. These clearly vary in how intrusive they are to the user, from the popping-up Mosaic to the small discreet icon in the corner of the screen.
If the user is currently logged off when the results arrive, ELF can either send a mail message containing the pointers to the results, or allow for PrISM to indicate the presence of results the next time the user logs on.
More experienced users can control ELF directly, using search parameters, and requesting immediate searches (cf. traditional database searching, where the user makes a query and waits for the result). ELF also collects statistics about which keywords seem to work best, and which information sources produce the best results (as selected by the user). These can then be used either by ELF or the user to tailor the searching to fit in more with the working practices of the user.
The back-end of ELF consists of the agents that perform the actual searching of the various information sources. One agent (called the News agent) is responsible for querying a database of news articles. One of its responsibilities is to find articles that announce new WWW sites. This is done in a simple way by looking for ANNOUNCE (or some derivative), followed by a Uniform Resource Locator (URL) indicating where the WWW site is.
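The News agent's announcement filter might be sketched as follows. The exact pattern ELF uses is not given in the paper; this regular expression is purely illustrative.

```python
# Sketch of the News agent's announcement filter: look for "ANNOUNCE"
# (or a derivative such as "ANNOUNCEMENT") followed by a URL in an
# article body. The regex is an illustrative assumption, not ELF's own.
import re

ANNOUNCE_RE = re.compile(
    r"\bANNOUNC\w*\b.*?(?P<url>(?:https?|ftp|gopher)://\S+)",
    re.IGNORECASE | re.DOTALL,
)

def find_announced_urls(article_text):
    """Return URLs announced in a news article, in order of appearance."""
    return [m.group("url") for m in ANNOUNCE_RE.finditer(article_text)]

article = "ANNOUNCEMENT: new server at http://info.cern.ch/ now live"
print(find_announced_urls(article))  # ['http://info.cern.ch/']
```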
Another agent interfaces to the Jumpstation search engine on the WWW. It mimics a user request to the Jumpstation and reformats the returned result (which is a WWW page of links to the results) before presenting it to the user. Similarly, agents can be provided for other WWW search engines, such as the WWW Worm and ALIWEB.
Other agents can include WAIS searchers, Gopher searchers, and mailing-list handlers. Agents can also be designed to access interactive services such as BIDS (one of the library catalogue systems). One of the drawbacks of providing agents that use the same interfaces as the user, rather than requiring database implementers to provide a special Application Programming Interface (API), is that if the interfaces change, the agents will no longer work. This could be as simple a change as a rearrangement of some menus. We will address this issue somewhat in the next section.
Since different information sources provide different search strategies (such as Boolean exact matches, set operations, or partial matching), some agents have to do more work than just sending a query on and retrieving the results. For example, an agent that accesses a search engine that only accepts a single keyword might need to submit several searches for a multiple-keyword query, and then combine the results, so that, as far as the user or the rest of ELF is concerned, it answered the query in one fell swoop.
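A sketch of such a query-combining agent, assuming a hypothetical engine that accepts only one keyword at a time: the agent fans out one search per term and intersects the result sets, so callers see a single multi-keyword answer.

```python
# Sketch of an agent wrapping a single-keyword search engine: submit
# one search per term and AND-combine (intersect) the result sets.
# `single_search` stands in for the real remote engine.
def multi_keyword_search(single_search, keywords):
    """AND-combine several single-keyword searches into one result set."""
    results = None
    for kw in keywords:
        hits = set(single_search(kw))
        results = hits if results is None else results & hits
    return results or set()

# Toy engine over a fixed corpus, for illustration only.
CORPUS = {"doc1": "elf agent search", "doc2": "agent network", "doc3": "elf network"}
engine = lambda kw: [d for d, text in CORPUS.items() if kw in text.split()]
print(multi_keyword_search(engine, ["elf", "search"]))  # {'doc1'}
```

OR-combining (set union) would be the analogous strategy for engines whose callers expect any-match semantics.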

The departmental ELF
As outlined above, since ELF is a computer program, it can generate large numbers of queries in a short space of time for long periods without rest (after all, this is why we replace people with computers). It was quickly realized that if everyone used an ELF, the networks would be flooded, and the search engines accessed by the ELF agents would be overloaded. To address this, we propose a departmental-level ELF. This ELF will be responsible for actually doing the work. The agents of a user's ELF will submit their queries to the departmental-level ELF, which then has the opportunity to multiplex queries across users (in other words, to perform duplicate queries only once). Results can be cached and used to answer later queries (recognizing that the results may be out of date). The departmental ELF could also help mimic the word-of-mouth phenomenon by allowing for the transfer of information between user ELFs. Another benefit of the departmental ELF is that it provides centralized management of agents, which addresses the changing-interface problem alluded to in the previous section by requiring just one place to be changed.
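The query-multiplexing idea can be sketched as a cache keyed on normalized query terms, so that duplicate queries from different users' ELFs reach the remote search engine only once. This is a simplified model; a real departmental ELF would also expire cached entries, since results go out of date.

```python
# Sketch of departmental-ELF query multiplexing: duplicate queries are
# served from a cache, so the expensive remote search runs only once
# per distinct query. Cache expiry is deliberately omitted here.
class DepartmentalELF:
    def __init__(self, search_fn):
        self.search_fn = search_fn   # the real (expensive) remote search
        self.cache = {}
        self.remote_queries = 0      # how often we actually hit the engine

    def query(self, terms):
        key = tuple(sorted(terms))   # normalize so term order doesn't matter
        if key not in self.cache:
            self.remote_queries += 1
            self.cache[key] = self.search_fn(terms)
        return self.cache[key]

dept = DepartmentalELF(lambda terms: ["result for " + " ".join(terms)])
dept.query(["elf"])
dept.query(["elf"])                  # second identical query: cache hit
print(dept.remote_queries)           # 1
```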

Current state and future plans
A prototype for a single ELF is near completion. This ELF includes a database of news articles (dynamically changing as new articles come in and old ones expire), and a News agent that operates over this database looking for announcements of new WWW sites, or just performing general keyword searching. Other agents include a Jumpstation agent and an Archie agent (Archie is an index of most FTP sites in the world). The prototype also includes PrISM, and a user-monitoring agent which periodically searches the user's directory space for new or changed files, and attempts to categorize them with a few keywords, using a combination of full-text analysis, typed-text analysis, and clustering techniques. This ELF runs under Unix with the X Window System, and is implemented in a mixture of C++, TCL (Tool Command Language) and Perl.
The development of ELF will involve iterative design, with user testing at each stage.By employing usability metrics, we are aiming to test groups of students, lecturers and researchers using the initial prototype.
The development of ELF as a tool to support the process of learning in a higher-education context is being conducted within an overall framework provided by the concept of the Intensely Supportive Learning Environment (Sykes et al., 1993). This adopts a constructivist approach to learning (Kommers et al., 1992) in which the learner has available a range of tools for the performance of learning tasks. The process of learning through actively seeking and structuring knowledge is emphasized at the expense of the traditional view of courseware as packaged and pre-digested exposition. An important issue raised by this is the identification of the point at which the work done by ELF agents might become counter-productive for an individual learner.
Final aims of the project are to build a departmental ELF, to port the user interface to MS Windows, and to release the system to the UK academic community.