Saturday, May 17, 2008

The Thinking Alphabet

In the chapter "An Alphabet That Thinks," Richard Lanham raises several thoughtful concerns about the emerging trend of electronic texts. His thesis concerns how electronic texts, especially e-books, are fast replacing the nostalgia of the printed codex. The popular critique for the most part holds that electronic culture is irreverently changing the textual face of the reading experience. Lanham steers away from this critique and proposes that it is time we looked at how textual experience can be dynamically adjusted through an evolved set of alphabets--alphabets that think.

In his conception, the alphabet is personified: it can think and therefore can exist as a rhetorical reality. To adapt to this new possibility, Lanham believes that one should conceive of e-texts as having a "polyvalent" form that includes text, sound, and graphics. Previously the text was flat and linear, bounded within a two-dimensional world. But the polyvalent identity of this neo-text opens up spaces that can be utilized for dynamic embeds such as an image clip or a sound bite; in the process it renders reading more real and more rhetorical.

Although Lanham's definition of neo-text is a practical possibility, his philosophy behind the creation of such text is somewhat contentious. He proposes that the C-B-S (clarity, brevity, sincerity) model of composing text has to give way to a more self-conscious production of texts that privileges a sort of rhetoric of intent rather than a representation of fact. This is somewhat disturbing. Even granting that electronic communication affords customized packages of information based on purpose and context, it does not obviate the need to be transparent, both rhetorically and directly. As such, the C-B-S theory of communication holds good even when one intends to package audience- and context-specific information.

Friday, May 16, 2008

Visual Pervasiveness

Postmodern public life is a collage of visual realities that constantly invades and adjusts our private lives. Nicholas Mirzoeff, in his discussion of visual culture, contends that to a large extent this might be the case, since much of the postmodern uncertainty has resulted from a paradigm change from a predominantly textual culture to a visual culture. Whether his thesis is true or false is debatable, but he nonetheless makes a very interesting supposition about dealing with visualization. Mirzoeff suggests that living in a visual culture does not mean understanding the meaning of that culture (3). In a sense the assumption purports that visual culture by itself is not experiential and that it takes more than experience to understand it. Given the vast canvas with which Mirzoeff is working, this might be a possibility, since he tends to define visual technology as anything that can be looked at and that can "enhance natural vision" (3). The second part of his definition looks at visualization not in terms of "scene" but rather in terms of "agency"--a conduit enabling sight. From this perspective, I believe he is right in saying that we need to understand visualization first in order to experience it. We have to know the channel, the "fluff," in order to experience the stuff, i.e. the object of vision. This definition also extends the realm of visual culture to the domain of CMC, where the latter acts as the agency.

On the other hand, Mirzoeff offers another view of "spectatorship" or visualization by shifting from the technical aspect to a more philosophical one, which he calls the "sublime" (9). Visualization not only fragments reality, as mentioned in the earlier section of his discussion, but also weaves a composite artistry of reality. Visual reality or representation transcends physical perception toward a philosophical realization. Mirzoeff believes that through visualization one can experience not only what is immediate but also what is implied.

In the postmodern age, where interaction with print culture is waning, the experience of the sublime through the visual is definitely a new way of experiencing the old.


Wednesday, May 14, 2008

What is New Media?

In the present age, we hear expressions like "connected world," "web society," "cyber community," and so on. According to van Dijk, these buzzwords stand to contradict the notions of "individualization, social fragmentation, independence and freedom" (1). However, on closer observation it seems that individualization and integration are part of the same reality--New Media. According to Manovich, new media can be understood as a computer revolution that "affects all stages of communication, including acquisition, manipulation, storage, and distribution" (19). This in turn influences the ways different types of media operate, such as "texts, still images, moving images, sound, and spatial constructions" (19).

Furthering Manovich's notion of new media, we can look at the structure of new media as a process of convergence of three different types of communication--telecommunications, data communications, and mass communications. This idea of convergence leads to another characteristic feature of new media: digitization. Manovich refers to this as "numerical representation," defined as a set of discrete data that are sampled and quantified (28).
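Manovich's two steps--sampling (taking discrete readings of a continuous signal) and quantization (rounding each reading to a fixed set of levels)--can be sketched in a few lines of code. This is only an illustrative toy; the sine-wave signal, the sampling rate, and the five quantization levels are my own choices, not anything from Manovich.

```python
import math

def sample(signal, rate, duration):
    """Take discrete readings of a continuous signal at a fixed rate (Hz)."""
    n = int(rate * duration)
    return [signal(i / rate) for i in range(n)]

def quantize(samples, levels, lo=-1.0, hi=1.0):
    """Round each sample to the nearest of `levels` evenly spaced values."""
    step = (hi - lo) / (levels - 1)
    return [round((s - lo) / step) * step + lo for s in samples]

# A 1 Hz sine wave, sampled 8 times per second, quantized to 5 levels:
# the continuous "analog" signal becomes a short list of discrete numbers.
analog = lambda t: math.sin(2 * math.pi * t)
digital = quantize(sample(analog, rate=8, duration=1.0), levels=5)
```

The point of the sketch is that after these two steps the signal is nothing but numbers, which is exactly what lets a computer store, manipulate, and transmit any medium uniformly.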

Based on the attribute of digitization, new media offers a new level of interaction. Interaction in new media is chronotopic, to borrow Bakhtin's term, i.e. situated in time and space. It enables multilateral communication through teleconferencing, VoIP, Internet telephony, and so on. On the other hand, tools like e-mail offer the possibility of communicating asynchronously; this is the time dimension of the new media.

These and many other attributes (not discussed here) of new media posit two fundamental questions: Who controls new media? And to what extent should we balance our learning between technology and the tools of technology? At the turn of this new millennium, when new media is defined in terms of integration, interactivity, and digitization, how can we resolve these issues of power and knowledge?

Tuesday, May 13, 2008

Fluff and Stuff

Lanham's notion of "stuff and fluff" in a way reflects Edward de Bono's "lateral thinking." Lanham comments that it is important to revise our thinking about commodities not only in terms of their physical components but also in terms of the design of those components. It is in this sense that Lanham advocates thinking outside of what is obvious or perceptible, as lateral thinking would have it.

Fluff, as we commonly understand it, is not to be misunderstood in the context of new media. Fluff is the new media's economies of scale that determine a different kind of value than "exchange value." Lanham points out that as we become "less and less constrained by material circumstance" (9), we oscillate between the foreground (the stuff) and the background (the fluff).

In the Information Age, the value of the product is therefore determined by the forces of both stuff and fluff. In other words, the value of a product is not only seen in terms of its tangibility but also in terms of its power of drawing attention. For instance, as Lanham describes, "the entire video game universe aims to make players into acute and swift economists of attention…the designer of these digital dramas is clearly an economist of attention, then, but so are the players. Parents may not need to worry so much about their children when they play video games. They may be training themselves for a new economy” (17).

In this debate of fluff and stuff, Lanham proposes a new economic theory that seeks to define scarcity from the perspective of fluff rather than stuff, i.e. there is a huge supply of product to meet demand, but what is scarce is the ability to understand how the stuff is created. So we live in a society where we need to optimize our attention to really understand the value of the stuff. For Lanham, new media is the agency facilitating that optimization. It is the entry point for the new economy.

Thus, based on Lanham's assumption, I think one of the major characteristics of new media is to define not the product but its use. For instance, twenty people may possess a new media artifact (fulfilling the classical economic demand-supply model), but out of these twenty only seven might actually define its use based on its conception rather than mere application. These seven people may be "attentive" enough to understand that the artifact is a function of a complex of ideas that actually determines its value. Thus, our stuff is not what we "dig and grow" (as in the Agricultural Age), nor is it what we invent (as in the Industrial Age), but what we conceive.

Tuesday, March 04, 2008

Iteration as Prototyping and Control Systems Theory

The notion of interactive interface is largely based on iterative design principles. The driving principles behind iterative testing are twofold: test early and often, and implement regular changes; both ultimately inform the practice of conceptual design. This iteration is seen as part of the formative testing process, of which low-fidelity (or paper) prototyping is one of the better known methods.

According to Barnum, lo-fi prototyping is specifically helpful because of its low cost, its recursive nature of use, and its value as a team-building tool (130). In other words, it functions in three main ways: it helps the team collaborate by creating a common "vocabulary," it communicates the metaphor, and it validates the design in its incipient stages. Although these are useful attributes, the advantages of lo-fi prototyping are restricted to interface and product designs. As far as complex systems are concerned, the iterative techniques of lo-fi prototyping have not been tested as an evaluation process. Redish contends that in the area of complex, open-ended systems the evaluative process "quite correctly" occurs at the "pre-design usability studies" (104), but at the same time there is still a lack of formative studies in these complex domains. Is it then possible to incorporate some of the principles of lo-fi prototyping into complex systems?

The most challenging part of the evaluative process for complex systems, in my view, is setting goals and creating appropriate tasks; as Redish mentions, "The goals, however, will usually be at a higher level than typical usability testing goals, and they may be much harder to specify" (106). Given this challenge, it might be worth considering some features of lo-fi prototyping in conceptualizing domain-specific tasks and goals, such as devising situational metaphors, creating short but inclusive scenarios, and identifying contextual affordances. Although this is merely a way to view workflow and job-flow processes through the lens of lo-fi prototyping, it might at least provide an answer to the cost-benefit ratios when it comes to making formative evaluations of complex systems. Can we then assume that paper/lo-fi prototyping can be productively employed to evaluate a process design just as easily as a product/interface design?

On the other hand, Genov's view of iterative testing is based on a control systems model. This is a very useful model insofar as it is "guided by specific design and business goals." Unlike lo-fi prototyping, it works on a set of reference values (formed by a combination of higher-level and concrete goals). Thus, this method presupposes definite start and end points with intermediate feedback loops. Another point of difference between lo-fi prototyping and control systems theory (CST) is that while the former balances both negative feedback (a point of problem) and positive feedback (existing ease of use) in its process of change, CST readjusts itself only to negative feedback. Thus, to use a negative analogy, it might be said that in CST, "[t]esting is [not] used as a kind of natural selection for ideas, helping [the] design evolve toward a form that will survive in the wilds of the user community" (Rettig 23). Does it then imply that CST is a more closed system of iteration than lo-fi/paper prototyping? How would the abstract goals (behavioral and psychological) in CST account for evaluating "open-ended, unstructured, complex problems"? Is it possible to determine effective reference values for complex systems that are potentially contingent?
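The contrast drawn above can be sketched as a toy negative-feedback loop: the design is nudged only when a measured value falls short of a reference value, and positive feedback simply stops the iteration. This is my own minimal reading of a control-systems iteration, not Genov's actual model; `measure` and `adjust` are hypothetical stand-ins for whatever evaluation and redesign steps a real process would use.

```python
def cst_iterate(measure, adjust, reference, design, max_rounds=10):
    """Iterate a design toward a reference value, reacting only to
    negative feedback (shortfalls below the reference)."""
    for _ in range(max_rounds):
        score = measure(design)
        if score >= reference:            # positive feedback: no change, stop
            break
        design = adjust(design, reference - score)  # correct the shortfall
    return design

# Toy example: the "design" is just a number, the "score" is the number
# itself, and each adjustment closes half the remaining gap.
final = cst_iterate(measure=lambda d: d,
                    adjust=lambda d, gap: d + gap / 2,
                    reference=1.0, design=0.0)
```

Notice that the loop has a definite start (the initial design), a definite stopping condition (the reference value or the round limit), and no mechanism for exploring ideas that already score well, which is what makes it feel more closed than the open-ended "natural selection" of paper prototyping.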

Usability: A Process of Inclusion

With due reverence to the E&S TAP (think-aloud protocol), I would like to argue in the first part of my discussion that the B&R TAP should be adopted as the practitioner's method for think-aloud in usability testing, as it is more rhetorically sensitive to users.

Due to the lack of alternative or better methods in usability testing, TAP seems to take center stage no matter what approach is adopted. The E&S approach, being the father of TAP, naturalizes itself into a standard for usability testing, although, as Boren and Ramey point out, it tends to leave the system out of the equation. It is true that in usability we are ultimately trying to shape the product in terms of the user, and hence any adherent of E&S would naturally argue that the user's position is central and that any external parameter (for instance, a practitioner query) might pollute that centrality. But the more I think about the user, the more I think about the nature of the user's interaction. For me, the E&S method, with its "keep talking" cue, seems to ignore the entire notion of interaction with the system and/or product. Moreover, the E&S cue, with its mechanistic persistence, precludes the role of the facilitator, who is one of the key components of usability (as Barnum would argue). The E&S TAP also treats user activity as given and in isolation from the system, which again tends to obscure the notions of satisfaction and utility, since the idea of a product or a system is minimized or absent. In the event of a breakdown, is it natural for a user to "keep talking"?

Krahmer and Ummelen admit the drawback of the B&R method, saying "that subject's performance (in terms of task success and lostness) is actually influenced by the way thinking aloud protocol is administered […]," and they hope to minimize this "undesirable side-effect" (116); but they nonetheless show that the method has proactive and reactive measures built into it. On the other hand, the E&S TAP, with its subliminal kind of reminder, seems to throw unnecessary roadblocks in at least two potentially real situations--(a) when the user is confounded and (b) when the conceptual design/metaphor is incomplete. How can the E&S TAP account for these types of criticality?

The whole TAP in usability hinges on the assurance that the user is not the topic of the test but, at best, a domain expert feeding insights to practitioners. Given this role-playing between user and practitioner, one has to account for speech as a manageable series of interpolated collocations consisting of (user) utterances and (practitioner) assurances in the form of back channels and continuers. To this end, Boren and Ramey rightly point out that "any time words are spoken knowingly for another's benefit, the roles of speaker and listener exist: both parties are aware of and are reactive to each other" (267). In E&S this reactive notion is replaced by incantations, which blur the context of manageable speech acts.

Further, there is an immense rhetorical significance inherent in any testing situation, and more so in usability testing, since there are definable exigences, purposes, contexts, and constraints. All these aspects call for agents and agencies, where the user acts as a defined agent, TAP is the agency, and the facilitator is the co-agent or neutral agent who negotiates the constraints (system breakdown, incomplete metaphor, etc.). As such, it is important that the facilitator as co-agent take part at least in the ways suggested by Boren and Ramey--"[…] practitioner necessarily slipping into the more technical role of troubleshooter (and possibly the reassuring role of apologetic host)" (272). In E&S the role of the facilitator is more passive than neutral and therefore tends to ignore the rhetorical aspect of the process. Is it then possible to address the rhetoric within the E&S TAP framework? Or does the E&S TAP framework completely ignore the rhetorical aspects fundamental to a test situation?

Given these necessary usability entailments and others, such as the lack of importance given to user satisfaction (E&S consider this L3 data), I tend to view the B&R TAP as better adapted to usability testing and rhetorically more responsive. As a result, the descriptive orientation embedded within usability should necessitate more inclusivity than exclusivity if it is to develop as a discovery tool.

Norgaard and Hornbaek reflect on seven important issues about the process of usability testing.

They are:
- Lack of immediate post-session follow-ups
- Tasks designed as confirmation of preexisting notions
- Contexts influencing tests
- Questions that are hypothetical rather than experiential
- Usability perceived as experimental in nature
- Infrequent measures of utility
- Asymmetric data analysis

I have a few specific questions related to some of these observations:

-How does immediate post-session evaluation help in usability given the constraints of time and productivity?
-How can we apply methods of discovery to task designing?
-If rhetoric is important, how can we control certain variables?
-Is it really possible or practical to control some of the situational factors?

Finally, I think that as part of usability testing we ought to broaden the scope of MEELS by including both utility and rhetoric in the discovery process, since product and test/user are affected by utility and rhetoric respectively.

Accounting for Homogeneity Assumption

Virzi's homogeneity assumption in usability testing definitely addresses the problem of cost-benefit ratio and at the same time makes a case for small companies to invest in usability testing. However, as exemplified by Lewis's modification of the binomial model, the homogeneity breaks down when the test concerns subgroups. K&K earlier noted that there is a need for a "varied user […] in cases where the system is designed for organizations and not for individuals" (K&K 2); how, then, does Virzi's assumption hold true in an organizational setup? If there is a need to identify unique problems, we would need distinct heterogeneous groups (Caulton 5), which in turn challenges the homogeneity assumption in terms of decreased power. How, then, could we resolve the problem of decreased power with an increased number of heterogeneous subgroups that account for higher proportions of unique problems? I think one way to do it is to identify representative subgroups from the possible distinct groups. But how do we account for these representative subgroups from the heterogeneous sample?
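The cost-benefit argument behind the homogeneity assumption rests on the familiar problem-discovery formula 1 - (1 - p)^n: if every user detects a given problem with the same probability p, the expected proportion of problems found grows quickly with the number of users n. The sketch below just evaluates that formula; the value p = 0.31 is a commonly cited average detection rate, used here only for illustration, not Virzi's own data. The worry about heterogeneous subgroups is visible in the same formula: when a problem is unique to one subgroup, its effective p across the whole sample drops, and discovery slows accordingly.

```python
def proportion_found(p, n):
    """Expected proportion of problems found by n users, assuming each
    user detects any given problem with the same probability p."""
    return 1 - (1 - p) ** n

# With a homogeneous group and p = 0.31, five users already uncover
# most problems; with a subgroup-specific problem the effective p is
# diluted and many more users are needed.
for n in (1, 3, 5, 10):
    print(n, round(proportion_found(0.31, n), 2),
          round(proportion_found(0.31 / 4, n), 2))  # p diluted across 4 subgroups
```

This is why the decreased-power concern matters: the same n that looks generous under homogeneity looks thin once each heterogeneous subgroup effectively shrinks p for its unique problems.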

Also, I think that Vatrapu's concern with addressing the cross-cultural imperative in the structured interview can be related to the homogeneity assumption. In my view, culture as an independent variable plays an important role in identifying unique problems. Therefore, I think the question of unique problems would not depend on an increased number of heterogeneous subjects as long as the cultural impediment exists. On the other hand, the homogeneity assumption would hold good for identifying shared problems even if there is a manifest difference between the interviewer and the subjects. Thus, when designing an interface for an international audience, is it still possible to identify unique problems within the homogeneous group if the cultural barrier is removed?

Sociology of Usability

Usability testing, for me, is a practice that stands at the cusp of engineering and social science: engineering because it tends to be normative in its aspects (we have guidelines for testing, heuristic evaluation, and walkthroughs), but at the same time it is an approximation that tends to account for users' ease of interaction. It is with this involvement of the user that the idea of the 'social' is implied.

In Post-Modern Usability, Lund makes an interesting point about the evolving status of usability as a practice. He points out that usability has taken a leap from what used to be purely error-fixing to "embracing existential understanding of the user […]." It is in this sense that usability has afforded a sociological lens into what was originally seen as positivistic.
The fact that there is a 'human' element in the 'science' of usability opens it up to a lot more challenges than a clinical investigation in a laboratory setup would face. As such, I see usability as contingent on factors that cannot be, and ought not to be, controlled by its practitioners. If Lund's claim of "existential understanding" is an acceptable starting point for our (post-modern) definition of usability, then for me it is important to consider 'situation' as a strong factor. This situational factor could range from the user's psychographic and demographic background to the site of testing, including its attendant conditions.

Hence, my question is: as we start to implement our testing, is it possible to manage these contingencies and situations and take them into account? To what extent can we expect to have these existential elements accounted for before or during our testing?
Also, as we implement our task-oriented approach in testing, are we implicitly arrogating ourselves into the process, or is there still room for the unforeseen?
Finally, as I perceive it (though of course with a beginner's vision), usability testing is the intermediary between what is known (past and present) and what is unknown (future) from the user's point of view; where is it heading as far as this approximation is understood?

Determining the Externalities

The crux of the readings concerned the dichotomy of method versus object of criticism. It seemed that the suggested role of a critic, especially in rhetorical criticism, is to mediate between applying models of criticism and scratching the historical, social, and political veneers off rhetorical artifacts. Additionally, it appeared that in the ways models are conceived as external to the artifact--except for what Farrell calls "the didactic model of criticism"--there is an overt tendency to situate rhetorical frameworks as derivatives of communication studies, namely speech and psychology and their theories.

To turn to Farrell: he intuits that models can be drawn from the "phenomenon in question," but for rhetorical criticism he posits that "it would be nice if the model came from some field amicable to communication" (302). As a result, his discussion of symptom criticism and thematic criticism largely hinges on abstracting constitutive features of communication theories, albeit without a detailed description of what possible frameworks could be rhetorically contributive. It is interesting in this sense to note how the idea of critical rhetorical models is influenced more by derivative features from a shared discipline and less by more intrinsic, neo-classical constructs. On the other hand, the didactic model subsumes the neo-Aristotelian principle of anti-subjectivity, as Farrell concludes: "These ideas [the attributive features of communication] can no more be reduced to the artist's intentions than they can be reduced to a limited audience's coded response" (313). How, then, is it truly possible for a didactic critic to assay the "emergents" without broadening the objective function of the criticism? Why is aestheticism subordinated to the structure of the composition of the artifact in the didactic model when the rhetor's persona and voice set the tone of the discourse?

Again, in Campbell there is a strong insistence on viewing the critical act as a function of operative constructs, or what Richard E. Young describes as "an eye to see with." According to Campbell, this eye is essentially the subjective eye of the critic, which sees through differentiated visions--reflexive, cognitive, dialectical, and evaluative (5). To me this is less than a model, because these typologies are less differentiated than the models proposed by Farrell; also, the paradigm shift, as Kuhn would define it--the "replacement of one conceptual model by another"--is less obvious than any overt conception of models. Here, as the critic internalizes the "recreative" and the "appreciative" (6) modes, she is no longer detached from the object of criticism but moves through these pervious critical modes that Campbell labels subjective. Thus, in her article there is a negation of detachment between the object of criticism and the critic, unlike in Farrell's. For her, the critical act occurs by reconciling the Cartesian split of observer and participant. However, even in this supposition, the author primes her theory with communication studies and asserts that (rhetorical) criticism "should be autonomous, independent of the theory, methods, and criteria of other disciplines […]" (12). It makes me wonder if this intellectual seclusion is ultimately helpful, let alone possible (since modern critical response cannot ignore the roles of cognitive and perceptual psychology). Does this view not stand in direct antithesis to Habermas's call for communicative action, i.e. creating interpretive and emancipatory channels of communication by building knowledge through subjective and inter-subjective forces and sharing it across disciplines?

Speaking of externalities, Black's notion of "clusters of opinion" suggests innovative ways of appraising a rhetorical artifact. To him, "The rhetorical critic requires a method of analysis that enables him to connect the convictions that people have with the discourses that they hear and read" (169). This establishes the here-and-now imperative in a critical act. Following this requirement, I have identified the clusters of opinion (the symptoms and discursive constituents) in The Clash of Civilizations. To begin with, Huntington borrowed his title from an article by Bernard Lewis, "The Roots of Muslim Rage," published in the September 1990 issue of the Atlantic Monthly. Huntington prima facie gathers his choicest clusters from Lewis's highly partisan ideology, best described by invidious signifiers like "Western models," "revulsion against America," "Western civilization," "anti-Westernism," and "anti-Americanism," set against "House of Islam," "Muslim dissidents," "Muslim hostility," "Muslim possessions," "Muslim masses," and so forth. One may find verbal resurrections of these types of interventionist suggestions and aggressive deliberations throughout Huntington's essay, creating a straw man for the "us" versus "them" argument. One may also identify traces of variegated clusters of opinion in the essay derived from the works of Francis Fukuyama (The End of History), Paul Kennedy, Robert Kaplan, and Benjamin Barber (Jihad vs. McWorld). Thus, it can be said that for Huntington these clusters of opinion act as surrogates for his ideological thesis.

The Consumed Expression

For Burke there is no rhetoric without identification, and it must take place before the act. This identification occurs through associations of shared properties/substance, which Burke refers to as being "consubstantial" (Foss 174); for rhetoric to happen, there must be a perceived consubstantiality between the agent and the individuals who complete the communication cycle. This notion of identification is an a priori condition that distinguishes the Burkeian rhetor from the neo-Aristotelian construct of a rhetor. For the neo-Aristotelian rhetor the aim was persuasion by collocation; for the Burkeian rhetor the aim is persuasion by identification. Another major difference is that for the neo-Aristotelian, the act of persuasion has to be articulated in speech or writing, whereas in Burkeian constructs persuasion can occur without a single oral or verbal articulation; it is the identification of the unconscious, which resonates with the Jungian idea of the "collective unconscious" or the "objective psyche." This principle elevates rhetoric to a very powerful realm of possibilities. It also clarifies the Burkeian approach to rhetoric insofar as its medium is understood as language in all its symbolic manifestations and not as speech, which again is neo-Aristotelian in form.

By ascribing descriptive privilege to language, Burke makes it possible to study human motives as derivatives of language use. In this way, I think we can see language not merely as prescriptive and restrained by its normative character, but as a protean compound capable of adding and subtracting symbols. In his diachronic analysis of Hitler's rhetoric, Burke skillfully reveals the descriptive structure of language by identifying the numerous symbols underlying that rhetoric. Burke showed that Hitler primarily took words and symbols from the religious domain and materialized them into a more rabid political form; this is a case of the descriptive use of language, as Burke points out: "One knows when to 'spiritualize' a material issue, and when to 'materialize' a spiritual one" (Burke 48). Had language been only prescriptive, the interplay of motives would not have been possible and hence no rhetoric would occur. Burke also shows that Hitler not only manipulated the symbols (in this case religious ones)--for instance, the idea of "individuals […] surrounded by a movement, the sense of 'community'" (48), associative of the notion of a congregation--but also predicated their use in systematic form (the victimage of the Jews, contrasting their supposedly selfish individualism with a sacrificial German individualism).

Viewing language in such a descriptive, organic form, we can very well understand it as the fount of figures (of speech and thought), diction, and tropes (Fahnestock 8). In her marathon of an article, Fahnestock argues for the importance of figures as "epitomes" of expression. Her contention is that figures are not mere baubles of language but the creative underlife of human expression, present in the scientific fields as much as in the humanities. In this sense, I think, she distinguishes figures from diction by the principles of association and substitution--how closely an expression resembles ""normal" in the sense of acceptable usage" (15) and how easily it can be replaced by or reduced to the "degree zero" choice (16). Is it then possible to measure or analyze metaphors by these benchmarks? By this measure we can analyze how far the vehicle or frame is removed from, or can be reduced to, the tenor or focus (Burkholder & Henry 108), the latter being the Fahnestockian notion of the "normal" or the "degree zero." Thus, given the pervasiveness of symbols (seen in terms of figures, tropes, etc.), I'm interested to know two things: Is it possible to create a value-neutral expression? And do metaphors control meaning, or does meaning shape metaphors?

The Clash of Civilizations

To put this artifact into the Burkeian frame of dramatism reveals some fascinating truths about the motives of this rhetoric. The act: balkanizing the ontological tenor of civilization; the agent: the political agenda (internal), a former conservative in the National Security Council (external); the agency: a rhetoric of identity strengthened by the fallacy of stereotyping--"Islam has bloody borders" (Huntington 12); the purpose: arrogating the subjective thesis that "the clash of civilizations will dominate global politics" (3); and the scene: the post-Cold War world.

I am tempted to exhaust Burke, but to allow the privilege of completion, I will draw rein to my impulse.

Polarization of Rhetoric

The readings for last week were beset with claims and counterclaims: Black versus Leff, and Bitzer versus Vatz. These contending scholarly views tended to focus on two main questions: What are the ways of critiquing rhetorical situations? How is a situation rhetorically defined? Both questions have been answered and critiqued with qualified contradistinctions, and the fallout is that we get a mélange of critical convictions.
I read Black as someone who splits criticism into two watertight compartments, albeit offering a slim possibility of reaching a critical middle ground. For him the ‘etic’ form circumscribes the critic by defining theoretical limits, whereas the ‘emic’ form teases the essence out of the artifact. Given this either-or dyad, I feel that Black himself is dismissing the complexity of the scope of criticism. For him, though emic criticism holds greater openness, it is potent enough only to analyze, not to evaluate, an artifact, as he resolves: “I don’t believe that a critic should evaluate an object emically, but an emic interpretation may be an avenue into a fair and full etic evaluation” (334). I wonder if this is actually possible when someone intends to “vibrate” (Leff 345) the emic form in a situational analysis. For instance, if I were to situationally analyze the 1994 genocide in Rwanda, would I not want to evaluate the massacre in terms of the loss of human values rather than bring in a theoretical perspective (etic) to judge how morally and emotionally devastating it was? Why do I have to seek refuge in the mechanistic constructs of etic criticism for the judgment of a situation that clearly merits humane discernment? Perhaps the response to this question resides in Leff’s conviction that “abstract theories and models provide no rules for their connection to a particular phenomenon, and the study of this connection moves us outside the realm of any formal system” (344). Thus the process of connection (to humane subjects) is not mediated by models; rather, it depends on our “hunches” to seek out the matching modes (emic or etic).

We find an interesting phenomenological shift in Bitzer, for whom this hunch can never be intuitive. Indeed, he contends that there is hardly room for a hunch, so to speak, in any rhetorical situation. If there is a situation that can be defined as rhetorical, then there are discourses based on the conditions of that situation. In this sense, Bitzer reflects what Perelman calls “demonstration.” Like the demonstration, where “a calculation is made in accordance with the rules that have been laid down beforehand” (Perelman 13), the rhetorical utterance owes itself to “a rhetorical situation [that] must exist as a necessary condition of a rhetorical discourse […]” (Bitzer 6). For Bitzer, the whole idea of the rhetorical situation is centered on modifying or diffusing exigences. Therefore, if a situation is rhetorically responsive, then a rhetor must be able to modify or control all the dominant exigences based on the controlling exigence; in other words, the modifying ingredients are already available to the rhetor in the form of the audience, the controlling exigence, and the constraints. Thus, to bear this notion once again on the example of the Rwandan massacre, it would seem a classic case of historical materialism that coerces a deterministic view upon a situation in order to mark it as rhetorical. Steering clear of this philosophical warrant, Vatz presents his own notion of perspectivism. He makes some interesting claims about how far the “situational” determines rhetoric. For him situations are not reflected by language but translated; hence, it is the rhetor who determines the meaning of the situation, not the conditions inherent in the situation. Does it not then dilute the whole categorical imperative on the part of the rhetor? How can we then distinguish between a rhetor and a translator?
In my opinion, it might help us to think of this rhetor not as a translator but as a transcreator who bears upon himself the ethical responsibility of communication. On the other hand, Vatz’s notion of rhetoric as “[…] a cause not an effect of meaning. It is antecedent, not subsequent” (160) makes the claim of reducing every speech act to the rhetorical. He further says, “To say the president is speaking out on a pressing issue is redundant” (161). This view rests upon the idea that the whole notion of public speech is necessitated by a rhetoric of convenience (on the part of the speaker-translator) rather than by any act of ethical, moral, and political justification.

The Clash of Civilizations
Schiappa celebrates the expansiveness of the rhetoric of forms—“the rhetoric of X” (260). Extending this view, I see my artifact as representative of many inclusive forms. Inclusive because all its propositions aspire toward the single condition of what Donald McKenzie would label the political determinism of the Western powers. Let me illustrate this with a very overt example: the section headings. “The Next Pattern of Conflict,” “The Nature of Civilizations,” “Why Civilizations Will Clash,” “The Fault Lines Between Civilizations,” “Civilization Rallying: The Kin-Country Syndrome,” and “The West Versus the Rest” are only a few of the innumerable invidious forms that one may find in the essay. These headings run into the fallacy of begging the question in attempting to ground the claim of “The Clash of Civilizations?” The title would seem less a hypothesis that the author is attempting to prove intellectually and more a rhetorical question whose responses are obtrusively clear in the section headings.