The notion of the interactive interface is largely grounded in iterative design principles. The driving principles behind iterative testing are twofold: test early and often, and implement regular changes; both ultimately inform the practice of conceptual design. This iteration is best seen as part of the formative testing process, of which low-fidelity (or paper) prototyping is one of the better-known methods.
According to Barnum, lo-fi prototyping is especially helpful because of its low cost, its recursive nature, and its value as a team-building tool (130). In other words, it functions in three main ways: it fosters collaboration by building a common “vocabulary” among team members, it communicates the design metaphor, and it validates the design in its incipient stages. Useful as these attributes are, the advantages of lo-fi prototyping have so far been restricted to interface and product design. Where complex systems are concerned, the iterative techniques of lo-fi prototyping have not been tested as an evaluation process. Redish contends that for complex, open-ended systems the evaluative process “quite correctly” occurs in “pre-design usability studies” (104), yet formative studies in these complex domains remain scarce. Is it then possible to incorporate some of the principles of lo-fi prototyping into complex systems?
To my mind, the most challenging part of the evaluative process for complex systems is setting goals and creating appropriate tasks. As Redish notes, “The goals, however, will usually be at a higher level than typical usability testing goals, and they may be much harder to specify” (106). Given this challenge, it might be worth borrowing some features of lo-fi prototyping when conceptualizing domain-specific tasks and goals, such as devising situational metaphors, creating short but inclusive scenarios, and identifying contextual affordances. Although this is merely one way of viewing workflow and job-flow processes through the lens of lo-fi prototyping, it might at least answer the cost-benefit question when it comes to formative evaluations of complex systems. Can we then assume that paper/lo-fi prototyping can be applied to evaluating a process design just as productively as a product/interface design?
Genov’s view of iterative testing, on the other hand, is based on a control systems model. This model is useful insofar as it is “guided by specific design and business goals.” Unlike lo-fi prototyping, it works from a set of reference values (formed by a combination of higher-level and concrete goals). The method thus presupposes definite start and end points with intermediate feedback loops. Another point of difference is that while lo-fi prototyping folds both negative feedback (a point of difficulty) and positive feedback (existing ease of use) into its process of change, the control systems approach (CST) readjusts itself only in response to negative feedback. To use a negative analogy, in CST “[t]esting is [not] used as a kind of natural selection for ideas, helping [the] design evolve toward a form that will survive in the wilds of the user community” (Rettig 23). Does this imply that CST is a comparatively closed system of iteration relative to lo-fi/paper prototyping? How might CST’s abstract goals (behavioral and psychological) account for evaluating “open-ended, unstructured, complex problems”? And is it possible to determine effective reference values for complex systems that are potentially contingent?
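The control-systems loop described above can be sketched in code. This is my own illustration, not Genov’s formulation: the metric names, targets, and adjustment step are all hypothetical, and the point is only the shape of the mechanism, in which each cycle compares measured values against fixed reference values and triggers change only on negative feedback (a shortfall), ignoring positive feedback.

```python
# Hypothetical reference values: concrete targets standing in for
# the "combination of higher-level and concrete goals".
REFERENCE = {"task_success_rate": 0.90, "satisfaction": 4.0}

def next_iteration(measured, reference, step=0.5):
    """One feedback loop: adjust only metrics that fall short of target."""
    adjusted = {}
    for metric, target in reference.items():
        error = target - measured[metric]
        if error > 0:
            # Negative feedback: a shortfall drives a design change.
            adjusted[metric] = measured[metric] + step * error
        else:
            # Positive feedback: CST leaves what already works untouched.
            adjusted[metric] = measured[metric]
    return adjusted

# Definite start point, fixed number of intermediate loops, definite end.
measured = {"task_success_rate": 0.60, "satisfaction": 4.2}
for _ in range(5):
    measured = next_iteration(measured, REFERENCE)
```

Note how the loop converges toward the reference values but never revises the references themselves; that fixity is exactly what prompts the question of whether CST is a closed system compared with the open-ended evolution of paper prototyping.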