Knowledge-Based Systems for Automated User Interface Generation: the TRIDENT Experience

Jean Vanderdonckt
Institut d'Informatique, Facultés Universitaires Notre-Dame de la Paix,
rue Grandgagnage, 21, B-5000 NAMUR (Belgium)
Tel: +32 (0)81-72.49.75 - Fax: +32 (0)81-72.49.67 - Telex: 59.222 FacNamB
Email: jvanderdonckt@info.fundp.ac.be - URL: http://www.info.fundp.ac.be/~jvd/index.html

Key words: Computer-Aided User Interface Design, Interaction Object, Model-based Approach, Multi-windowing, Presentation Unit, Task-based Approach, Window.

1. INTRODUCTION

In this paper, we report on experience gained within the TRIDENT project [2,3]. TRIDENT (Tools foR an Interactive Development EnvironmeNT) consists of both a methodology and a supporting environment for developing highly interactive business-oriented applications. It includes several tools using knowledge-based techniques to automatically generate a user interface for this particular class of applications, i.e. tools that automatically:
1. suggest to a designer a single interaction style or several combined interaction styles;
2. select appropriate interaction objects from functional and operational specifications;
3. lay out interaction objects using multiple placement strategies that result from the previous step;
4. identify windows for a predefined interactive task according to a task model;
5. provide a guideline document for a particular design situation from task parameters and design options.

For each of these questions, a short definition of the problem of concern is given and a description of the knowledge-based systems supporting them is provided. The strengths and weaknesses are then discussed with respect to an evaluation of these techniques performed with the first versions of the systems. Finally, we discuss how these systems are evolving and how they are planned for the future. Rather than describing how these various techniques have been implemented in software tools (for this purpose, we refer to [34] for full details), we will focus on the way these techniques have changed and have been augmented by additional features in order to decrease their weaknesses. The knowledge-based techniques listed above have two common characteristics:
1. they all work on knowledge bases containing ergonomic rules (i.e., guidelines, design rules, recommendations); we will see how these various rules have been coded and used within the tools;
2. they attempt to provide ergonomic guidance at design time as much as possible rather than performing ergonomic evaluation after a user interface has been designed.

2. AUTOMATIC SUGGESTION OF INTERACTION STYLE

2.1 Definition of the Problem

Interaction styles are not numerous since they basically consist of a list of eleven items: natural language, command language, query language, questions & answers, menu selection, form filling, function keys, multi-windowing, direct manipulation, iconic interaction, multimedia interaction. For a particular task and one or many types of users, a designer can choose either one single isolated interaction style (e.g., form filling) or multiple combined interaction styles working together (e.g., form filling with function keys and command language). This problem is addressed in [22], section 3.1, and in [21]. It was especially addressed for query interactive tasks in [14].
Figure 1. Examples of production rules.

IF pre-requisites ARE moderate AND productivity IS high AND objective environment IS existent AND structure IS low AND importance IS high AND complexity IS low TO moderate
THEN possible interaction style IS command language

IF task experience IS rich AND system experience IS high AND motivation IS high AND complex interaction media experience IS high
THEN possible interaction style IS command language

IF processing type IS mono-processing AND process capacity IS low
THEN possible interaction style IS command language

2.2 Knowledge-Based Techniques

The software includes production rules contained in matching tables suggesting a pool of possible interaction styles by deriving them from three categories of parameters resulting from context analysis [2]:
1. task parameters: pre-requisites, productivity, objective task environment, environment reproducibility, task structure, task importance, and task complexity;
2. user stereotype parameters: task experience, system experience, motivation, complex interaction media experience;
3. work place description parameters: processing type, process type.

Figure 1 shows some examples of production rules that are embedded in the system. For each class of parameter, the system asks for the appropriate value for the interactive task of concern (fig. 2). Once answers have been provided to all required questions, the small expert system automatically suggests a list of one or many interaction styles (fig. 3) according to the production rules coded. For example, the first production rule of fig. 1 is coded as a first-order predicate formula (fig. 4). The list of possible interaction styles is progressively refined one class after another.

Figure 2. Possible answers for each parameter.
Figure 3. A possible interaction style.
Figure 4. Production rule coded in the small expert system.

2.3 Discussion

The first steps conducted with this approach highlighted several shortcomings:
- it is not always possible to provide a strict, categorical value for each required parameter since some tasks can combine multiple profiles simultaneously (e.g., being somewhat important at the beginning, but crucial at the end);
- the set of production rules is intrinsically incomplete, not because some rules have been unfortunately forgotten but because empirical research has not yet investigated all the conclusions for all possible values;
- even for a particular domain problem, it seems hard to produce a clear taxonomy of interactive tasks; tasks are always reduced to a predefined set of parameters;
- the matching tables tested on commercially available software showed that conclusions sometimes needed to be revisited completely according to an emphasis placed on the available technology first, on the task after, and on the user finally. This may be due to the area of business applications [32].

2.4 Evolution

The system today evolves towards a derivation of interaction styles based on a multi-valued fuzzy logic, so that the parameters can be represented with multiple weighted values (e.g., high structure for 60% of actions, weak structure for the rest), resulting in conclusions with multiple success scores (e.g., menu selection with an appropriateness score of 75%, command language with 20%, natural language with 5%).
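To make this multi-valued derivation concrete, the following Python fragment sketches how weighted parameter values could be matched against production rules to yield an appropriateness score per interaction style. It is only an illustration, not TRIDENT code: the parameter names, the rule base, and the simple averaging scheme are assumptions made for the example.

    # Illustrative sketch only: weighted parameter values are matched against
    # production rules to yield an appropriateness score per interaction style.
    # Parameter names, rules and the averaging scheme are hypothetical.

    def rule_score(rule_conditions, designer_values):
        """Average the weight the designer assigned to each value required by the rule."""
        scores = []
        for parameter, required_value in rule_conditions.items():
            distribution = designer_values.get(parameter, {})
            scores.append(distribution.get(required_value, 0.0))
        return sum(scores) / len(scores) if scores else 0.0

    def suggest_styles(rules, designer_values):
        """Aggregate rule scores into one appropriateness score per interaction style."""
        appropriateness = {}
        for conditions, style in rules:
            score = rule_score(conditions, designer_values)
            appropriateness[style] = max(appropriateness.get(style, 0.0), score)
        return sorted(appropriateness.items(), key=lambda item: item[1], reverse=True)

    # Hypothetical rule base: each rule maps parameter values to one interaction style.
    RULES = [
        ({"task experience": "rich", "system experience": "high",
          "motivation": "high"}, "command language"),
        ({"structure": "high", "task experience": "poor"}, "menu selection"),
    ]

    # The designer may spread a parameter over several weighted values.
    answers = {
        "structure": {"high": 0.6, "low": 0.4},
        "task experience": {"poor": 0.8, "rich": 0.2},
        "system experience": {"high": 0.5, "low": 0.5},
        "motivation": {"high": 1.0},
    }

    print(suggest_styles(RULES, answers))
    # -> menu selection with a score of about 0.70, command language with about 0.57

The point of the sketch is only that no single categorical value is required anymore: the same designer input yields several styles, each with a success score, as described above.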
Though most designers tend to solve this problem manually or according to common practice or style guides, they are more likely to use the system if:
- they can tailor the fuzzy logic rules to their local conventions (e.g., add a new production rule, edit an existing rule, change the weight of a rule);
- they are not forced to key in a lot of parameters (i.e., the work load imposed by the system does not exceed the effort of thinking the problem through by hand).

What they like most is the means to justify their choices by referring to the relevant production rules. The system is also augmented by a technique that minimizes the standard deviation between the parameter values provided by designers and the values contained in the production rules; this case occurs when no actual values match the possible values. Though the system displays the possible values for each parameter, it is rather "passive" in the sense that a designer is forced to provide all the parameter values before receiving a suggestion for one or many interaction styles. The production rules are static and do not evolve with the designer's experience and knowledge, unless the expert system engineer maintains the rules consistently with the design issues.

3. AUTOMATIC SELECTION OF INTERACTION OBJECTS

3.1 Definition of the Problem

Automatic generation of a user interface inevitably raises the question of which interaction objects (IO) (e.g., an edit box, a combination box, an icon, a tool box) should materialize application information and actions. This problem is fully described in [6,8,10,25]. Ergonomic rules for selecting IOs can be found in style guides (e.g., IBM CUA [17]), standards, design guides (e.g., [31]), or some empirical studies (e.g., [24]). A complete corpus of selection rules for choosing IOs is given in [27].

3.2 Knowledge-Based Techniques

A first prototype of the supporting system was grounded on several algorithms for selecting IOs from a series of functional and operational specifications (e.g., data type, data length, number of possible values, domain definition, expandability by the user). These specifications are partially depicted in TRIDENT in an object-oriented entity-relationship model and partially specified with a specification language called DSL (Dynamic Specification Language). As these algorithms were completely embedded within the system, they were rendered completely opaque. As the ergonomic rules were coded by algorithms that vary from one data type to another, they were virtually unmaintainable.

A second version introduced a set of selection rules contained in an expert system. There was a clear need to identify each possible value for each specification in order to avoid compound conditions (e.g., data type = alphanumeric AND data type <> integer OR ..., THEN interaction object = list box) that are hard for designers to manipulate. The system was able to select a single IO out of twenty-five different IOs from seventeen parameters for nine supported data types. The production rules have therefore been made canonical (a hypothetical sketch of such canonical rules is given after the list below), leading to a better understanding and customization of rules, but also to three shortcomings:
1. rule redundancy: the canonization of production rules implies redundancy because the same ergonomic rule can be found in different situations;
2. lack of visibility and follow-up: one salient feature of expert systems is their ability to fully explain their reasoning, and executing production rules textually is not very representative for designers;
3. excess of specification work: the more specifications are provided, the more appropriate the selected IO will be, but designers dislike being forced to specify all details of each information item or action before the automatic selection. It is too constraining and unrealistic.
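As announced above, here is a minimal Python sketch of what canonical selection rules might look like: one explicit value per specification, and an abstract interaction object as conclusion. The rule set, the specification names, and the thresholds are hypothetical; they merely illustrate the style of rule the second version of the system relied on.

    # Illustration only: canonical selection rules, one explicit value per
    # specification, mapping functional specifications to an abstract
    # interaction object (AIO). Rule set and thresholds are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Spec:
        data_type: str          # e.g. "alphanumeric", "integer", "boolean"
        possible_values: int    # number of known possible values (0 = unbounded)
        expandable: bool        # may the user add values to the domain?

    def select_aio(spec: Spec) -> str:
        """Each branch states a single, canonical condition per specification."""
        if spec.data_type == "boolean":
            return "check box"
        if spec.data_type == "alphanumeric" and spec.possible_values == 0:
            return "edit box"
        if spec.possible_values <= 3 and not spec.expandable:
            return "radio button group"
        if spec.possible_values <= 20 and not spec.expandable:
            return "list box"
        if spec.expandable:
            return "combination box"
        return "edit box"

    print(select_aio(Spec("alphanumeric", 8, False)))   # list box
    print(select_aio(Spec("alphanumeric", 8, True)))    # combination box

Written this way, every rule tests one value per specification, which is what makes the rules easy to read and customize, at the price of the redundancy noted in the first shortcoming.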
3.3 Discussion

Whereas rule redundancy seems impossible to avoid, the last two problems have been addressed as follows:
1. The lack of visibility of production rules invited us to order the rules into decision tables that can be graphically represented with decision tree techniques. The production rules in the expert system do not need to adhere to a particular execution order since the flow of control is not determined by the order in which the selection rules have been coded. This is no longer the case with decision tables and decision trees. Decision trees [11] illustrate search in a state-space representation where a preferred order of rule processing is imposed. This order might not be optimal in all cases, but the fact that the selection of an appropriate IO through a decision tree can be illustrated graphically is much appreciated by designers. Each node represents a state where a current IO is selected, and the links represent a change from this state to another one representing a more ergonomic IO. This change results from the examination of the current value of the information specification.
2. This version of selection was rather automatic and straightforward since an IO was selected from all the specifications contained in a repository. The designers' attitude was passive. To make them more participative, it was suggested not to force them to input all required specifications before generation. Instead, they provide a minimal specification for each information item (fig. 5). From this, the software automatically selects a first proposal. According to the current state in the decision tree, the software then becomes more interactive by asking designers one question at a time. The different possible answers, corresponding to the allowed future paths in the decision tree, are presented. When the designer provides an answer, not only is a more ergonomic IO selected but the corresponding specification is added to the repository. Designers are thus more likely to see the direct impact of multiple specifications. The decision tree can therefore be explored with forward chaining, backward chaining, or bi-directional chaining. As long as the designer wants to proceed, a more ergonomic IO is selected.

Figure 5. Control panel for computer-aided selection of interaction objects.
Figure 6. Graphical decision tree.

The graphical representation of decision trees (fig. 6), combined with the flexibility implied by directional and progressive chaining, offers a truly computer-aided selection of IOs which becomes really active by giving explicit control to designers. Another interesting feature included in the computer-aided selection of interaction objects is its independence across multiple computing platforms. Indeed, rather than selecting particular Concrete Interaction Objects (CIO, [25]), the system is based on an abstraction of behaviour, leaving presentation aspects aside. This abstraction is called an Abstract Interaction Object (AIO, [25]). Figure 7 details the abstraction aspects for the Push Button AIO. All AIOs are of course arranged in a hierarchy (fig. 8) according to object-oriented definitions. Thus, the decision rules (fig. 9) contained in the decision tree no longer work on particular instances of interaction objects belonging to different environments, but rather select an AIO.

Figure 7. Object-oriented representation of an Abstract Interaction Object.
Figure 8. Hierarchy of Abstract Interaction Objects.
Figure 9. Hierarchy of decision rules.
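The progressive, question-at-a-time traversal described above can be pictured with a short Python sketch. This is not TRIDENT code: the tree, the questions and the AIO names are assumptions made for illustration; only the mechanism (each answer moves to a node holding a more ergonomic AIO, and the designer may stop at any time) follows the description.

    # Illustration only: a decision-tree node keeps the currently selected AIO
    # and one question; each answer leads to a node with a more ergonomic AIO.
    # Questions, answers and AIO names are hypothetical.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        aio: str                                  # AIO selected in this state
        question: str | None = None               # next specification to ask, if any
        children: dict[str, Node] = field(default_factory=dict)

    tree = Node(
        aio="edit box",
        question="Is the number of possible values known and bounded?",
        children={
            "no": Node(aio="edit box"),
            "yes": Node(
                aio="list box",
                question="Is the domain expandable by the user?",
                children={
                    "yes": Node(aio="combination box"),
                    "no": Node(aio="radio button group"),
                },
            ),
        },
    )

    def walk(node: Node, answers):
        """Refine the selection one answer at a time; stop whenever answers run out."""
        for answer in answers:
            if node.question is None or answer not in node.children:
                break
            node = node.children[answer]
            print(f"answered {answer!r} -> current AIO: {node.aio}")
        return node.aio

    print(walk(tree, ["yes", "no"]))   # radio button group

The specification implied by each answer would be added to the repository along the way, which is what lets designers see the direct impact of each specification they provide.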
3.4 Evolution

Further evaluation of these techniques showed that designers rapidly acquire the ergonomic rules contained in the knowledge bases. They sometimes know the right IO even before going through the decision tree. But another problem emerged. Though extensible, decision trees:
- deliver a predefined traversal;
- can only select an IO that is already reachable by the selection rules;
- are based on ergonomic rules that select absolute results.

At the leaf nodes of the decision tree, ergonomic rules can be added, but they are competitive or conflicting. Therefore, a relative selection should be performed if we want to increase the expressiveness of the selection technique. Today, we are trying to use the theory of argumentation [18,23] to extend this power. At these stages, the software suggests multiple solutions. Each solution is argued in the sense that it preserves some cited ergonomic criteria (e.g., compatibility, consistency, dialog control, work load, adaptiveness) but violates some others.

Figure 10 exemplifies a case where the designer has to select an AIO to input/display the temperature of the human body. It shows the logical steps through which the user may pass before reaching a final solution. At each step, a more appropriate AIO is selected by privileging a particular criterion among several dimensions (e.g., ergonomic criteria, performance, representativeness). The argumentation displays the pros and cons of selecting this or that IO (fig. 11). The designer is consequently responsible for the selection, but (s)he is aware of the strengths and weaknesses of each choice. The trade-off can therefore be supported, but arguing all steps, even in the narrow scope of computer-aided selection, represents a tremendous amount of work.

Figure 11. Hierarchy of abstract interaction objects.

We conclude that even a system including such a high number of rules is unable to automatically generate a usable user interface without designer intervention. Moreover, the really ergonomic IOs are reached at the end of the decision tree, where convention, user preference, technology availability and ergonomic criteria have to be considered to get a usable result [32].
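The kind of argued, relative selection aimed at above could look like the following Python sketch. Everything in it, the candidate AIOs for a body-temperature field, their pro and contra lists, and the ranking by a privileged criterion, is a hypothetical illustration, not the actual argumentation scheme of [18,23].

    # Illustration only: each candidate AIO is argued with the ergonomic criteria
    # it preserves (pros) and violates (cons); the designer makes the final choice.
    # The candidates and their argument lists are hypothetical.

    candidates = {
        "edit box": {
            "pros": ["work load (fast keyboard entry)"],
            "cons": ["dialog control (invalid temperatures possible)"],
        },
        "spin button": {
            "pros": ["consistency", "dialog control (bounded values)"],
            "cons": ["work load (many clicks for distant values)"],
        },
        "slider": {
            "pros": ["compatibility (analogue reading)", "dialog control"],
            "cons": ["representativeness (hard to set a precise value)"],
        },
    }

    def argue(candidates, privileged):
        """Rank candidates by the privileged criterion, but show every argument."""
        ranked = sorted(
            candidates.items(),
            key=lambda kv: sum(privileged in p for p in kv[1]["pros"]),
            reverse=True,
        )
        for aio, args in ranked:
            print(f"{aio}: pro {args['pros']}, contra {args['cons']}")
        return ranked[0][0]

    # Privileging dialog control for a body-temperature field (the fig. 10 scenario).
    print("suggested:", argue(candidates, "dialog control"))

The output is a ranking rather than a single absolute answer: the designer sees the pros and cons of every candidate and remains responsible for the trade-off.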
4. AUTOMATIC PLACEMENT OF INTERACTION OBJECTS

4.1 Definition of the Problem

Once IOs have been selected in order to input/display the application's information, they need to be laid out within a more encompassing IO (e.g., a window, a dialog box, a panel). The usability of the resulting composite IO heavily depends on the way its IOs are placed, taking into account visual design, aesthetics, user preference, task performance, clarity of layout, consistency, ...

4.2 Knowledge-Based Techniques

Having a hierarchy of abstract interaction objects [25], the goal is to transform them into concrete interaction objects with their complete coordinates within the composite IO. The first investigated strategy for complete automatic placement, called the two-balanced-column strategy, was based on a generic grid to be filled in by IOs following visual continuity. This strategy is quite static and passive since it performs fully automatic placement: the designer triggers the strategy and receives the output. It is based on the assumption that the IOs in the hierarchy are already arranged in the sequence to be reproduced in the window. The IOs are equally shared between two balanced columns according to a book metaphor. This strategy is fully described in [1].
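To give an idea of what the two-balanced-column strategy does, here is a small Python sketch that splits an ordered sequence of IOs into two columns of roughly equal height, following the book metaphor. The heights and the simple splitting rule are assumptions for the example; the actual strategy is the one described in [1].

    # Illustration only: split an ordered sequence of interaction objects into two
    # columns of roughly balanced height, reading like the pages of a book.
    # The heights and the splitting rule are assumptions for the sketch.

    def two_balanced_columns(ios):
        """ios: list of (name, height) pairs, already in presentation order."""
        total = sum(height for _, height in ios)
        left, right, used = [], [], 0
        for name, height in ios:
            # Keep filling the left column until half of the total height is reached.
            if used + height <= total / 2 or not left:
                left.append(name)
                used += height
            else:
                right.append(name)
        return left, right

    fields = [("Name", 1), ("Address", 3), ("City", 1), ("Zip", 1),
              ("Country", 1), ("Phone", 1)]
    print(two_balanced_columns(fields))
    # (['Name', 'Address'], ['City', 'Zip', 'Country', 'Phone'])

Because the designer only triggers the strategy and receives the output, the sketch also makes the passivity of this first strategy visible: the sequence and the balancing rule are fixed, and there is no designer interaction during placement.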