
Object–action interface


Object–action interface (OAI) is an extension of the graphical user interface, closely related to the direct manipulation user interface. It can help designers create better human–computer interfaces and increase the usability of a product.

There are two related models: the action–object model and the object–action model.[citation needed]

The object–action model gives priority to the object over the action: the user first selects an object and then performs an action on it. OAI adheres to this model.

OAI model


The OAI model represents the user's workplace graphically using metaphors and lets the user perform actions on objects. The sequence of work is to first select an object graphically (using a mouse or other pointing device) and then perform an action on the selected object. The result of the action is then shown graphically to the user. In this way, the user is relieved of memory limitations and of the syntactic complexity of the actions, and the interface behaves in a WYSIWYG manner. The user controls the sequence of actions and sees their effects immediately; if an action produces an undesired effect, the user can simply reverse the sequence of actions.
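
This cycle can be illustrated with a short sketch (Python is used here only for illustration; the class and function names, such as Interface and append_text, are invented and not part of any standard toolkit): an object is selected first, an action is then applied to it, and each action records how to undo itself so the sequence can be reversed.

    # Sketch of the OAI interaction cycle: select an object, apply an action,
    # and keep enough information to reverse the sequence if needed.
    # All names here are hypothetical.

    class Document:
        """A hypothetical interface object the user can select and act on."""
        def __init__(self, name):
            self.name = name
            self.text = ""

    class Interface:
        def __init__(self):
            self.selected = None      # object-first: selection precedes action
            self.history = []         # performed actions, kept for reversal

        def select(self, obj):
            self.selected = obj

        def apply(self, action, *args):
            """Perform an action on the selected object and record its undo."""
            if self.selected is None:
                raise ValueError("OAI requires an object to be selected first")
            undo = action(self.selected, *args)
            self.history.append(undo)  # store the inverse so the step can be undone

        def undo_last(self):
            """Reverse the most recent action, restoring the previous state."""
            if self.history:
                self.history.pop()()

    def append_text(doc, new_text):
        """An example action: changes the object and returns a closure that undoes it."""
        old = doc.text
        doc.text += new_text
        def undo():
            doc.text = old
        return undo

    # Usage: select the object, then act on it; the effect is immediately visible.
    ui = Interface()
    doc = Document("notes.txt")
    ui.select(doc)
    ui.apply(append_text, "Hello")
    ui.undo_last()                    # reverses the last step if it was undesired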

In the action–object model, the computer is seen as a tool for performing different actions. In the object–action model, by contrast, the user gains a strong sense of control from the feeling of direct involvement: the computer is seen as a medium through which different tools are represented, much as objects are manipulated in the real world.

Designing an OAI model starts with examining and understanding the tasks to be performed with the system. The task domain includes the universe of objects within which the user works to accomplish a goal, as well as the domain of all possible actions the user can perform. Once these task objects and actions are agreed upon, the designer creates an isomorphic representation of them as corresponding interface objects and actions.

The designer thus maps the objects of the user's world to metaphors and the user's actions to plans. The interface actions are usually performed with a pointing device or keyboard and therefore have to be visible to the user, so that the user can decompose a plan into steps such as pointing, clicking, and dragging.
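
A rough sketch of this mapping might pair task objects with interface metaphors and task actions with plans made up of pointing, clicking and dragging steps. The concrete entries (document, folder, wastebasket) are illustrative assumptions, not drawn from any particular source.

    # Task objects in the user's world mapped to interface metaphors.
    object_metaphors = {
        "document": "page icon on the desktop",
        "folder": "paper-folder icon",
        "discarded work": "wastebasket icon",
    }

    # Task actions mapped to plans, i.e. ordered steps of interface actions.
    action_plans = {
        "discard a document": [
            ("point", "document icon"),
            ("drag", "document icon onto the wastebasket icon"),
            ("release", "mouse button"),
        ],
        "open a document": [
            ("point", "document icon"),
            ("double-click", "document icon"),
        ],
    }

    # Print each task action together with its plan of interface actions.
    for task_action, steps in action_plans.items():
        print(task_action)
        for interface_action, target in steps:
            print(f"  {interface_action}: {target}")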

In this way, direct manipulation user interfaces (DMUIs) present a snapshot of real-world situations and map the user's natural work sequence onto the interface. Users therefore do not have to memorize a course of actions, which reduces the time needed to become familiar with a new way of working. It also significantly reduces the users' memory load and thus enhances usability.

Task hierarchies of objects and actions


Tasks are composed of objects and actions at different levels. The position of an object and its related actions within the hierarchy may not suit every user, but because such hierarchies are comprehensible they are highly useful.

For the user


The most natural way of solving a complex problem is to divide it into sub-problems and tackle them independently. Merging the sub-solutions then yields a solution to the main problem; this is essentially a divide-and-conquer approach to problem solving. Users follow the same approach in the real world when they perform tasks: each complex task is divided into simpler tasks. By managing the different levels within such a hierarchy, the overall process is simplified, and users learn to carry out tasks without having to consider implementation issues.
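
A minimal sketch of this divide-and-conquer idea, with an invented example task, might decompose a task recursively and merge the sub-solutions:

    def solve(task):
        """Recursively decompose a task and merge the sub-solutions."""
        name, subtasks = task
        if not subtasks:
            return [name]                   # a simple task is handled directly
        solution = []
        for sub in subtasks:
            solution.extend(solve(sub))     # solve each sub-problem independently
        return solution                     # merging yields the overall solution

    # A hypothetical complex task decomposed into simpler sub-tasks.
    write_report = ("write report", [
        ("gather material", [("search library", []), ("take notes", [])]),
        ("draft text", []),
        ("proofread", []),
    ])

    print(solve(write_report))
    # ['search library', 'take notes', 'draft text', 'proofread']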

For the designer


Ben Shneiderman suggests the following steps for designers to build a correct task hierarchy:

  1. Know the users and their tasks (by interviewing users, reading workbooks, and taking training sessions)
  2. Generate hierarchies of tasks and objects to model the users' tasks (see the sketch after this list)
  3. Design interface objects and actions that metaphorically map to the real-world universe
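
Steps 2 and 3 might be sketched as follows; the mail-handling domain and every name in it are assumptions made purely for illustration:

    # Step 2: hierarchies of task objects and task actions (nested dicts).
    task_objects = {"correspondence": {"letter": {}, "address book": {}}}
    task_actions = {"send letter": {"write letter": {}, "address letter": {}, "post letter": {}}}

    # Step 3: metaphorical mapping to interface objects and actions.
    interface_objects = {"letter": "message window", "address book": "contacts list"}
    interface_actions = {"write letter": "type into the message window",
                         "address letter": "pick a name from the contacts list",
                         "post letter": "click the Send button"}

    def walk(hierarchy, mapping, depth=0):
        """Print a hierarchy alongside its interface counterpart, if any."""
        for name, children in hierarchy.items():
            counterpart = mapping.get(name, "")
            print("  " * depth + name + (f" -> {counterpart}" if counterpart else ""))
            walk(children, mapping, depth + 1)

    walk(task_objects, interface_objects)
    walk(task_actions, interface_actions)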

Interface hierarchy of objects and actions


This hierarchy is similar to the task hierarchy and contains the following:

Interface objects


Users interacting with a system build up a basic mental model of computer-related objects such as files, buttons, and dialog boxes. They also gain experience with the properties of these objects and with how to manipulate an object through its properties, and they learn how to perform actions on the objects to achieve their computing goals. A hierarchy of such objects is therefore maintained, representing the resources of the interface.
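
Such a hierarchy might be sketched with a small, hypothetical class hierarchy in which every interface object exposes properties that the user can inspect and manipulate:

    class InterfaceObject:
        """Base of the hierarchy: every interface object has named properties."""
        def __init__(self, **properties):
            self.properties = properties

        def set_property(self, name, value):
            # The object is manipulated through its properties.
            self.properties[name] = value

    class File(InterfaceObject):
        pass

    class Button(InterfaceObject):
        pass

    class DialogBox(InterfaceObject):
        pass

    report = File(name="report.txt", read_only=False)
    ok = Button(label="OK", enabled=True)
    report.set_property("read_only", True)   # the user changes an object property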

Interface actions


This hierarchy consists of complex actions decomposed into low-level units that can be performed on the objects in the interface-object hierarchy. Each level in the hierarchy represents a different degree of decomposition. A high-level plan to create a text file might involve mid-level actions such as creating the file, inserting text, and saving the file. The mid-level action of saving the file can be decomposed into lower-level actions such as storing the file with a backup copy and applying access control rights. Still lower-level actions might involve choosing the name of the file and the folder to save it in, and dealing with errors such as a shortage of space.
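
This decomposition can be sketched as a simple tree, using the example steps from the paragraph above (the exact grouping of the lower-level actions is an assumption):

    # The high-level plan, its mid-level actions, and the decomposition of
    # "save the file" into lower-level actions, arranged as nested dicts.
    action_hierarchy = {
        "create a text file": {
            "create the file": {},
            "insert text": {},
            "save the file": {
                "store the file with a backup copy": {},
                "apply the access control rights": {},
                "choose the file name and folder": {},
                "handle errors such as lack of space": {},
            },
        }
    }

    def show(node, depth=0):
        """Print each level of the decomposition with indentation."""
        for action, subactions in node.items():
            print("  " * depth + action)
            show(subactions, depth + 1)

    show(action_hierarchy)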

For the user


Users learn interface objects and actions in several ways, such as through demonstrations, training sessions, or trial and error. When these objects and actions have a logical structure that can be related to familiar task objects and actions, this knowledge becomes stable in the user's memory.

For the designer


The OAI model helps the designer understand the complex processes a user has to carry out to use an interface successfully for a given task. Designers model the interface actions and objects on familiar examples and then fine-tune these models to fit the task and the user.
