Wednesday 21 January 2009

Workshop - Completing the Circle: Incorporating Evaluation in Creative work - part 2

I wanted to blog my notes on each paper from the 'Completing the Circle' Lansdown Symposium, as an online record I can access (rather than paper notes I will no doubt lose or scrunch up by mistake). It also seems like a useful way to reflect on the presentations a couple of days later.

Chair's Welcome

Stephen Boyd Davis raised a few interesting questions when introducing the symposium:

  • Is it necessary to evaluate creative work, or does the work stand on its own without being experienced by people? Most people at the workshop advocated that some evaluation was necessary as part of the artistic process, mentioning a quote (from Dewey, I think) that I have totally forgotten to reference in my notes. It would have been good to hear a more rounded discussion of this, though.
  • Who should evaluate the work - the creator? a specific evaluator? a user? This was most directly addressed by Ernest Edmonds, with most other presenters only hinting at it.

Value of HCI evaluation in preserving new media art (Piotr Adamczyk)

Unfortunately the angle this was presented from was so off-topic for me that I didn't find much to help my own work in this talk. Adamczyk's main focus is the preservation and recording of new media art (I believe he works for a museum?). He discussed what control and contribution the audience have over a piece of art, and how this affects evaluation and archiving. A final point Adamczyk made was intriguing: would generalised evaluation methods "smooth out the rough edges" of creative work? And if so, the "inspections made at these rough edges" might be more informative than investigating the generalisms drawn by such evaluation methods. Fascinating point (maybe I got more from this talk than I first realised).

Evaluating Cause and Effect in User Experience (Mark Springett, Middlesex)

Springett used a case study of examples of user experience of e-banking web design - a slightly different type of visual creativity that added some variety to the papers. He talked about different "instruments for evaluation": actual physical tracking (such as eye tracking or galvanic skin response), asking for summative feedback, conducting probes into causality (direct and indirect) and critiquing user reactions. 

Springett mentioned the triangulation approach to evaluation (evaluating from many different angles) which was also taken up by Michael Hohl later on. I've come across triangulation before in terms of academic training, as Richard Cox and I have discussed it during the work I did for him on the Research Methods course at Sussex. Seems an intuitively good way to progress, as long as each 'point in the triangle' is concretely connected to the other points, to get an overall picture.

[N.B. Could really have done with a coffee break at this point - four presentations in one 2 hour session - and then three presentations over 2 hours 15 after lunch - meant my brain was tiring towards the end of the day]

[At this point in the workshop I was wondering: are we considering the creative work, as I had hoped, or the experience of the audience for that creative work, which is useful but only part of the picture to me. Up till this point it was very much the latter perspective which was prevalent; this continued throughout the day.]

Vision and Reality: Relativity in Art (Robin Hawes, University College Falmouth)

This presentation incorporated psychology, physiology and philosophy. Its central idea was a good one: we should be careful when assuming everyone sees the same thing when viewing a piece of art, and when evaluating that art accordingly - influences such as individual saccadic patterns (rapid surveying eye movements when taking in a larger image) will mean that different people have different views of the same work. Ideally I would have liked to see this idea pitched at a slightly higher academic level, with more evidence such as an eye-tracking experiment to back up what he was saying. Having said that, the paper may be worth a further read to see if Hawes extends what he said in the presentation.

Using Grounded Theory to Develop a Model of Interactive Art (Michael Hohl)

I may have written the title down slightly incorrectly here, but the Grounded Theory explanation was the most interesting part of this talk for me. Grounded Theory is a technique that seems intuitive but was formally described by Glaser and Strauss in the sixties, as a social science tool for qualitative evaluation. Hohl presented participants with a piece of interactive art and interviewed them after they had finished interacting with the artwork. The process of going from verbose interviews, with no pre-conceived hypothesis, to theoretical abstractions was guided by Grounded Theory.

As I understand it, there are four stages:

  1. Open coding - tagging the transcriptions with very general themes
  2. Axial coding - defining concepts and categories
  3. Selective coding - refining the concepts
  4. Expressing the theory
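To make the four stages above concrete, here is a toy sketch in Python using entirely invented interview fragments - purely illustrative of the flow from raw transcript to theory, not a real qualitative-analysis tool.

```python
# Hypothetical interview fragments about an interactive artwork
snippets = [
    "I wasn't sure if the screen was reacting to me at first",
    "Once I realised it followed my movement I stayed much longer",
    "It reminded me of playing with light as a child",
]

# 1. Open coding: tag each fragment with very general themes
open_codes = {
    snippets[0]: ["uncertainty", "feedback"],
    snippets[1]: ["discovery", "engagement"],
    snippets[2]: ["memory", "association"],
}

# 2. Axial coding: relate the open codes into broader categories
axial = {
    "making sense of the work": ["uncertainty", "feedback", "discovery"],
    "personal connection": ["memory", "association", "engagement"],
}

# 3. Selective coding: refine towards a core category the others revolve around
core_category = "engagement grows once interaction is understood"

# 4. Expressing the theory: a statement grounded in the coded data
theory = (
    f"Core category: {core_category}. "
    f"Supported by categories: {', '.join(axial)}."
)
print(theory)
```

In real Grounded Theory the stages are iterative rather than a one-way pipeline, with constant comparison back to the data at each step.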

I wish I'd known about this before doing a paper based on interviews... but as it turns out this process is pretty much what I did anyway, through seeing other people do research with interviews. Seems quite common sense as well.

Are you seeing what I'm seeing? Eyetracking evaluation of dynamic scenes (Group from Leeds Metropolitan/London Metropolitan)

This talk traced the attentional flow of film viewers - what they pay attention to as a scene unfolds. There were some very impressive visualisations of this attentional flow, created in Max/MSP using the Jitter plug-in.

The researchers performed two rounds of statistics during this project. The second round was guided by repeated input from a statistician. I didn't catch much of the exact statistical testing they were doing but I think this would be worth reading about in their paper.

Using the Sensual Evaluation Instrument (Laaksolahti, Isbister, Hook)

Even though this was the talk that had been pointed out to me by two people, I have to admit I was flagging in concentration at this point. All I picked up was that the sensual evaluation instrument seemed to be made up of several small objects of different shapes (some smooth, some spiky, etc.) that participants used to express how they felt about some object of evaluation. So rather than putting their thoughts into words, they were asked to express them via these shaped objects. Rather a nice idea for getting around the problem of expressing sub-language concepts in words, but I suspect this approach adds a layer of ambiguity in the interpretation - what does it mean if, say, a person chooses the spiky object to express their thoughts about some piece? You then have to find words to express and interpret this anyway, surely? Otherwise the value you can extract for more general evaluation is quite limited (not everything can be described by shapes - in a paper, for instance, you couldn't use the shapes to report the findings as effectively and clearly as with words).

The speaker (Laaksolahti) apologised in advance for poor slides, saying he'd had little time to prepare due to illness. This gave me quite a negative initial impression before he had even started, even though he seemed very pleasant during the talk.

3 viewpoints on Interactive Art, Evaluation and user experience (Ernest Edmonds, Zafer Bilda, Lizzie Muller, Creativity and Cognition studios, Sydney, Australia)

This talk was the highlight of the day for me, probably due to Edmonds' excellent presentation of the topic matter. I have a lot of notes from this talk!

The three roles of Evaluator (Bilda), Curator (Muller) and Artist (Edmonds) were discussed - by Edmonds in person, and by Bilda and Muller through pre-recorded videos.

  • Evaluator: Concerned with human behaviour and cognition
  • Curator: Concerned with the audience/artwork encounter (I consider this analogous to a provider or enabler type of role, in a more general context)
  • Artist: Concerned with the functioning of the artwork in particular (more generally I would refer to this role as a 'creator' type of role)

Methods of evaluation: direct observation (getting 'here and now' information, but perhaps disrupting the interaction between person and artwork) versus post-event recall by commenting on a video (perhaps more accurate, but the time passing between the event itself and the description may affect how well the person remembers interacting with the artwork).

To get people to engage with the artworks, Edmonds discussed three ways of stimulating engagement: Attractors to get people interested in the artwork, Sustainers to maintain their interest, and Relaters which - I think - help the person relate the artwork to other interests of theirs (not sure about this last one).

Considering aesthetics as part of the evaluation of artwork: aesthetics includes complexity and ambiguity (as a measure of difficulty) not just beauty/pleasantness (this links back to engagement, I should think). 

Evaluation done by this team uses social science software called (I think) Interact to measure interaction over time by making time-stamped observations (Edmonds et al have another paper on this). 

How do you take interactive art evaluation out of the laboratory setting? The Sydney group has the Beta-Space, a public area for installations.

One point which refers back to something that Adamczyk said earlier: Simplification smooths out over-complexity - but this isn't necessarily the right thing to do. 

Some useful references were given at the end of this talk.
