Synopsis: JJ is a software agent that uses facial expressions to visualize the emotional content of network traffic. Serving as both a network surveillance tool and an empathic information visualization, JJ is implemented as a Carnivore Client, an open-source platform for network surveillance applications.
While many visualizations rely on charts or graphs to convey numeric data, other visualization research has leveraged certain affordances of human cognition in order to represent information in a more qualitatively readable way. One important example of this is the work of Herman Chernoff, who pioneered the use of cartoon faces as a tool for portraying high-dimensional multivariate data. Chernoff’s research demonstrated that our intuitive and highly sensitive ability to interpret facial expressions could be incorporated into unusually legible visualizations of complex information.
JJ is an autonomous software agent who displays facial expressions appropriate to the emotional content of the words that are presented to him. Implemented as a Carnivore Client, JJ literally “puts a face” on the information transmitted through his host network, in order to provide a data visualization of the network’s “emotional content.” JJ operates according to a mapping established between two well-known psychological databases: (A) Ekman and Friesen’s set of “universal facial expressions” — the set of face photographs that have been shown to embody basic cross-cultural human emotions (namely: anger, fear, surprise, disgust, sadness, and happiness) — and (B) the Linguistic Inquiry and Word Count (LIWC) dictionary by Pennebaker, Francis, & Booth, which categorizes the “emotional associations” of several thousand common English words, and provides an efficient and effective method for evaluating the various affective components present in verbal and written speech samples.
JJ scans his host network for text packets, reading each packet one word at a time. When JJ finds a word that matches a term in the LIWC dictionary, his emotional state (represented as an array of affective activation levels) is updated in response to that word’s emotional associations. JJ then displays a (morphed) mixture of facial expressions, weighted according to the current intensities of his different emotions. Considered cumulatively, JJ’s expressions reflect the overall “mood” of his information environment in an extremely simple, yet direct and unmistakable way.
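The core loop described above — LIWC matches boosting an array of affective activation levels, which then weight a blend of the six basic expressions — can be sketched in Python. This is a minimal illustration, not JJ's actual implementation: the word table, decay constant, and class names here are all hypothetical stand-ins for the real LIWC dictionary and morphing code.

```python
# Hypothetical sketch of JJ's word-by-word update. Words from sniffed
# packets raise per-emotion activation levels; the levels, normalized,
# become morph weights for blending the six Ekman expression photos.

EMOTIONS = ["anger", "fear", "surprise", "disgust", "sadness", "happiness"]

# Toy stand-in for the LIWC dictionary: word -> (emotion, weight).
LIWC = {
    "hate":   ("anger",     1.0),
    "scary":  ("fear",      0.8),
    "wow":    ("surprise",  0.9),
    "gross":  ("disgust",   1.0),
    "lonely": ("sadness",   0.7),
    "love":   ("happiness", 1.0),
}

DECAY = 0.95  # activations fade so older words stop dominating the face

class JJ:
    def __init__(self):
        self.activation = {e: 0.0 for e in EMOTIONS}

    def see_word(self, word):
        # Every emotion decays a little with each word read...
        for e in self.activation:
            self.activation[e] *= DECAY
        # ...and a LIWC match boosts its associated emotion.
        match = LIWC.get(word.lower())
        if match:
            emotion, weight = match
            self.activation[emotion] += weight

    def expression_weights(self):
        # Normalized morph weights for blending the six face images.
        total = sum(self.activation.values())
        if total == 0:
            return {e: 0.0 for e in EMOTIONS}  # neutral face
        return {e: a / total for e, a in self.activation.items()}

jj = JJ()
for w in "i hate this gross scary mess".split():
    jj.see_word(w)
weights = jj.expression_weights()
print(max(weights, key=weights.get))  # → disgust (the most recent strong match)
```

Because the weights are a normalized mixture rather than a single winner, the displayed face is a morph dominated by the strongest emotion but tinged by the others — the cumulative "mood" the text describes.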
At present, JJ’s emotional responses conform to those of Pennebaker’s statistical “everyman”: for example, if JJ sees a word commonly associated with disgust, then he will present a “disgust” face. An alternate version of JJ could permit his user to modify these associations, and thus modify JJ’s apparent personality (so, for example, a “perverted” JJ might appear happy when he sees a “disgusting” word, while a “repressed” JJ might appear angry).
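The personality variants described above amount to re-keying the association between a word's LIWC category and the emotion JJ actually displays. A minimal sketch, with entirely hypothetical table contents, assuming associations are stored as a simple word → emotion mapping:

```python
# Hypothetical sketch: a "personality" is a substitution table applied on
# top of JJ's default (LIWC-derived) word-to-emotion associations.

DEFAULT = {"gross": "disgust", "lonely": "sadness", "love": "happiness"}

PERSONALITIES = {
    "everyman":  {},                        # Pennebaker's statistical norm
    "perverted": {"disgust": "happiness"},  # delights in disgusting words
    "repressed": {"disgust": "anger"},      # reacts to them with anger
}

def response(word, personality="everyman"):
    """Emotion a given JJ variant would display for a word, or None."""
    emotion = DEFAULT.get(word.lower())
    if emotion is None:
        return None
    return PERSONALITIES[personality].get(emotion, emotion)

print(response("gross"))               # → disgust (the everyman default)
print(response("gross", "perverted"))  # → happiness
```

Keeping the LIWC associations fixed and remapping only the displayed emotion means a single dictionary edit changes JJ's apparent personality without touching the underlying word analysis.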