Wednesday 20 May 2009

2nd year review

Well...

Having been on the receiving end of almost continuous criticism of my work in my annual review yesterday, I am now at a bit of a loss as to what to do next.

Although I was expecting a bit of a grilling, I hadn't realised quite how much my proposals would get attacked. Some of the comments were down to my not having explained certain things properly (for example, stressing that the factors I break creativity down into must be more clearly defined than creativity itself; otherwise, how do you measure those factors, and what is the benefit of including them?), but other comments were far more fundamental, questioning my entire approach.

I've gone from having a very clear idea of what I'm doing and why, with real focus and motivation, to feeling quite lost again. Now I want to arrange a meeting with my supervisors sooner rather than later; at this stage I'd hoped to be really getting on with practical work, rather than still questioning what exactly I'm doing.

I guess watch this space?

Tuesday 12 May 2009

Journals

Went to a fairly uneventful research training session today on scientific writing. Beyond this quite useful Wikiversity link on how to write in a scientific way, the only real benefit I got from it (apart from a break from marking!) was to think about which journals I want to be aiming at for publication.

As my research focus has shifted away from music and towards creativity and evaluation this year, many of the journals I'm used to looking at have become less relevant. So I've now set up content alerts for some new journals and need to get into the habit of browsing them (ideally I want to set aside some time each week for this).

Here's a list of the journals that are quite relevant for me (along with some journals on music/AI that I'll probably still find interesting, but not necessarily useful for my current work):
  • Creativity Research Journal (impact factor 0.57)
  • Lecture Notes in AI
  • Lecture Notes in CS (probably not so relevant any more, really)
  • Leonardo (and related journals)
  • Digital Creativity
  • Cognitive Science: A Multidisciplinary Journal (impact factor 2.179)
  • Topics in Cognitive Science (impact factor 9.389)
  • Minds and Machines
  • Journal of Creative Behaviour (impact factor 0.429) (we don't get it)
  • Creativity
  • Creative Review (not peer reviewed, doesn't look that relevant)
  • Psychology of Aesthetics, Creativity, and the Arts
  • Evaluation
  • Evaluation Review
  • Research Evaluation
  • AISB Quarterly (not really a journal..!)

Music-related journals:
  • Journal of New Music Research
  • Computer Music Journal
  • Psychology of Music
  • Music Perception
  • Musicae Scientiae
  • Journal of Interdisciplinary Music Studies
  • Journal of Music and Meaning
  • Contemporary Music Review

Thursday 7 May 2009

Culturally Responsive Evaluation

Found an article in the Encyclopedia of Evaluation (SAGE Publications) that gave me one or two points to think about - Culturally Responsive Evaluation (Stafford L. Hood and Barbara Rosenstein).

The article looks primarily at educational evaluation in general (i.e. not specific to creativity), with some anthropological research appended to the main discussion. Education is a key domain in which evaluation and assessment take place (others that spring to mind after browsing this encyclopaedia include health, finance and decision science).

The article discusses how people from different cultures can be evaluated in the same set of evaluations and whether cultural implications are overlooked to some extent in traditional evaluation.

Leander Boykin (no reference given), working in the 1940s and 1950s, came up with "a set of 10 guiding principles, characteristics and functions of effective evaluation". I could try to look these up, although if they have stood the test of time they have probably been replicated a few times since then. I haven't seen Boykin referenced before, but then until now I've been reading work from different disciplines.

Ralph Tyler is presented as a major figure in educational evaluation. He shifted emphasis away from 'achievement testing' towards the merit and value of teaching, the influence of the curriculum, and student growth: in other words, more 'value added' concerns.

I wonder how 'value-added' is measured in education, for league tables? I think it's to do with taking results at a younger age, say SATs, and comparing them to results at leaving age (GCSEs and/or A-levels) - but then how does this work for primary schools, or to capture improvement during ages when students are not assessed (e.g. are they assessed between starting school and SATs)? Would be good to look this up.
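In the meantime, here's a minimal sketch of how I imagine the calculation works (my own illustration with entirely made-up numbers, not anything from the article or from the official league-table methodology): fit an 'expected progress' line from prior attainment to final results on national data, then score a school by how far its pupils' actual results sit above or below that line.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "national" data: KS2 (age-11) points vs GCSE (age-16) points.
nat_ks2 = rng.uniform(18, 33, 500)
nat_gcse = 12 * nat_ks2 + 40 + rng.normal(0, 25, 500)

# Expected-progress line fitted on the national data.
slope, intercept = np.polyfit(nat_ks2, nat_gcse, 1)

# One school's pupils: actual results vs what the national line predicts.
school_ks2 = np.array([24.0, 27.5, 30.0, 22.5, 28.0, 26.0])
school_gcse = np.array([340.0, 390.0, 420.0, 315.0, 400.0, 365.0])
predicted = slope * school_ks2 + intercept

# Value added = mean residual: positive means pupils did better than
# pupils with similar prior attainment did nationally.
print(f"Value-added: {np.mean(school_gcse - predicted):+.1f} GCSE points")
```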

The relevant points I got from this article:
  • Remember who the audience for your evaluation is, and what the point of the evaluation is.
  • Culture has a large influence on evaluating the 'worthiness' of something. To evaluate from a multicultural (cf. multi-discipline/cross-paradigm) perspective, you need to recognise these cultural implications and build them into the evaluation methodology - perhaps by including people who are familiar with a particular perspective/culture, or at least by taking guidance from such people.

Wednesday 6 May 2009

Measuring Consciousness (talk by Anil Seth at Sussex)

Yesterday Anil Seth, one of the researchers at Sussex University, gave a seminar to the COGS research group on: Measuring Consciousness - from behaviour to neurophysiology. An abstract and references can be found at http://www.sussex.ac.uk/cogs/seminars

This idea of measuring something which is not so amenable to measurement is very close to my work in measuring creativity, so this was pretty useful for me. 

In summary

  • Anil is advocating that we measure consciousness by combining several measures of properties of consciousness rather than by trying to find one catch-all measure of consciousness
  • He is examining tests that each measure some property of consciousness, and intends to combine their results into a more general measure of consciousness
  • The measures of consciousness/properties of consciousness come from a variety of backgrounds: not just behavioural and neurophysiological measures, but also measures originating in complexity theory and even economics (Granger causality)
  • This approach is very similar to mine. The main difference I can see is that Anil works more 'bottom-up', integrating tests together, seeing how the tests match intuition and refining continuously, rather than taking the more top-down route of determining beforehand a set of properties of consciousness and then finding suitable tests for each property. His is a more immediately hands-on approach (and is, I think, the approach my supervisor favours for my work), although I'm not sure how he avoids the situation where some vital factor of consciousness is overlooked just because few or no measures exist for it at present (a toy sketch of the 'combine several measures' idea follows this list)
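To make the combination idea concrete, here's a minimal sketch of the top-down version I have in mind: decide on the factors first, require a normalised measure for each, then aggregate. The factor names, weights and scores are all invented placeholders, not settled parts of my methodology (or Anil's).

```python
# Invented factors and weights, purely for illustration.
FACTOR_WEIGHTS = {"novelty": 0.4, "value": 0.4, "surprise": 0.2}

def combined_score(factor_scores: dict) -> float:
    """Weighted average of per-factor scores, each normalised to [0, 1].

    In the top-down approach a factor with no measure yet is an explicit
    gap, so we raise rather than silently ignore it - this is exactly the
    'overlooked factor' risk I worry about in the bottom-up approach.
    """
    missing = FACTOR_WEIGHTS.keys() - factor_scores.keys()
    if missing:
        raise ValueError(f"No measure yet for factor(s): {sorted(missing)}")
    return sum(w * factor_scores[f] for f, w in FACTOR_WEIGHTS.items())

# Example: scores produced by three hypothetical tests.
print(combined_score({"novelty": 0.8, "value": 0.6, "surprise": 0.3}))
```

The bottom-up version would instead just aggregate whatever test scores happen to exist, which is easier to get started with but makes the missing-factor check impossible to express.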

For more detail... read on!

Anil started by arguing that measurements of consciousness are vital for a scientific study of consciousness: as proof that consciousness is actually present, and of the degree to which the thing being studied is conscious (as opposed to a discrete yes/no answer to whether it is conscious). This corresponds very closely to my motivation for measuring creativity.

The idea of consciousness existing at different levels - from primary consciousness, e.g. being aware of what you see in front of you, to higher-order consciousness, e.g. being aware of being aware of seeing something in front of you - was quite intriguing. Are there different levels of creativity? Meta-creativity, being creative about creativity? (Is this what I'm doing, being creative about creativity?) It seems like a fascinating question, but perhaps not one for my current work; it could be more distracting than practically useful. Anil didn't go into much detail on this either.

To actually measure consciousness, Anil discussed both behavioural measures and brain-based measures, saying that you can't just measure one thing; you have to combine several different measurements of different properties. The aspects you measure have to be both differentiable (i.e. you can make divisions between them and treat them as different things) and integratable (i.e. you can combine them all together in a reasonable way; they don't obstruct each other). They also have to be measurable. I asked him afterwards if his intention was to use a combination of behavioural and neurophysiological measurements, as his talk focussed more on the details of individual measurements than on the mechanics of combining them. From his answer, I think that is the intention, but in practice the individual measurements are still being refined.

This approach is very close to my own approach to measuring creativity: measuring several properties and factors of consciousness/creativity and combining them into a more general measurement of that concept. Both lines of work draw on multi-disciplinary input, combining measures from different research areas. Where Anil differs is in how he develops the measurement methodology. Rather than determining first what properties need to be measured, then finding tests to match them, he finds tests for consciousness and then acknowledges that they only measure part of consciousness, or only fit with certain theories about consciousness without being useful for others.

It seems Anil is combining all the tests he finds to be useful (by seeing whether they produce a measurement of consciousness that matches what is expected), without worrying about the wider picture of whether this gives an accurate snapshot of all the properties associated with consciousness. There is an ongoing reflection on how the tests are used, leading to refinement of the tests. This is especially the case when tests produce different measurements of consciousness: studying the reasons why they diverge can give further insight into consciousness, making the tests useful beyond the actual measurements they provide.

Hmm.....

I wonder if this approach will lead to a slight bias in measuring consciousness? The properties that are more amenable to testing will, by the nature of research, probably be better provided for in terms of available tests, whereas those properties that are slightly trickier to test (but nonetheless equally valid in defining what consciousness is composed of) may not be so well catered for. This is the scenario I'm trying to avoid with the approach I am taking.

But on the other hand, consciousness (and creativity) is not something that can be reduced to a mathematical formula, e.g. 4*complexity + (0.5*awareness) and so on - so is there any point in seeing which factors contribute more than others? Or is it all too subjective, potentially leading me to over-define creativity? As I'm finding in a few of the books on creativity I'm reading at the moment, once you make a hard and fast definition of creativity, you run the risk of no longer measuring creativity itself, but just a subset of it. In his talk, Anil was careful to offer only a working definition of consciousness, not wanting to spend time discussing its finer points.

I know my supervisor is more in favour of my taking a similar approach to Anil's, slowly building up a battery of tests and discarding those which don't seem useful. So far I haven't been convinced that this bottom-up approach is as suitable as my top-down approach, despite the extra preparatory work the top-down route entails in working out how to define creativity via such criteria. But perhaps I can attack the work from both directions and see which pays off? (Or maybe make the two approaches meet in the middle?)

I'll just briefly summarise the content of the rest of Anil's talk, the parts which could be useful for me as I look into what tests I could use to measure properties of creativity.


Measurements of consciousness through behavioural measures:

  • Objective measures: getting participants to make accurate choices under forced decision-making conditions
  • Strategic control: examining participants' ability to use, or not use, knowledge according to instructions (again looking at the choices participants make, but now under different motivations)
  • Subjective measures: do participants know what they know? (and can they tell us?)
  • Post-decision wagering: participants place bets which reflect their confidence in answers given during experiments (Persaud et al 2007, Nature Neuroscience). Other recent measures along the same lines (e.g. Shields, Ruffman) include allocating confidence ratings to responses in an experimental situation; Anil described work showing that post-decision wagering is really equivalent to confidence ratings (a toy sketch of scoring such a measure follows this list)
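As a toy illustration of how a wagering-style measure might be scored (my own sketch with made-up numbers, not Persaud et al's actual analysis): conscious access is suggested when participants wager high more often on trials they answered correctly than on trials they got wrong.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulation: 200 forced-choice trials at 75% accuracy, where the
# (simulated) participant is more likely to wager high when correct.
n = 200
correct = rng.random(n) < 0.75
p_high = np.where(correct, 0.7, 0.3)  # an unaware participant would wager
high_wager = rng.random(n) < p_high   # at the same rate either way

# Wagering advantage: how much more often high wagers follow correct
# answers than incorrect ones. Near zero suggests no conscious access
# to the knowledge driving the correct answers.
advantage = high_wager[correct].mean() - high_wager[~correct].mean()
print(f"Wagering advantage: {advantage:.2f}")
```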

Here different measures fit in with different interpretations of or theories about consciousness - the measures and the theories/interpretations are interdependent.

Measurements of consciousness using brain-based measures:

  • Various methods of capturing brain activity, e.g. EEG recordings and ERPs (event-related potentials)
  • While subjects are awake, their brain activity is more irregular than when they are asleep (and EEG recordings are lower frequency during sleep? Berger 1929) - indicating that consciousness is only present when we are awake? Or present to a greater degree when we are awake? This doesn't fit with the work by Tononi that Anil discussed, which treats consciousness as a capacity for conscious information transfer/activity rather than the actual transfer/activity. Tononi would probably argue that we are just as conscious when we are asleep as when we are awake; it's not something that is switched on or off as we wake up or fall asleep. The capacity for consciousness remains the same regardless of our actual acknowledgement of being conscious
  • Dynamical complexity as a key indicator of the presence of consciousness (complex behaviour and information exchange in a dynamic system, i.e. a system which can change). (Q. So is there a threshold value of complexity, such that if a system's complexity is above this value then it can be deemed to be, to some degree, conscious?...)

Measures of dynamical complexity

  • Neural complexity (all the possible ways of dividing a collection of neurons into 2 subsets, and how much information the two subsets share in each case)
  • Information integration (the capacity for a system to integrate information dynamically, as opposed to the actual activity - Tononi 2004)
  • Causal density (Seth 2005, 2008), based on Granger causality: a directed relationship between two variables, checking whether activity in one variable helps predict later activity in the other (a toy Granger sketch follows this list)
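To pin down the Granger idea, here's a minimal sketch using statsmodels (my own toy illustration, not Seth's implementation; the data and lag settings are invented):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)

# Two toy time series in which x drives y with a one-step lag.
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

# statsmodels tests whether the second column Granger-causes the first,
# i.e. whether x's past improves predictions of y beyond y's own past.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
p_value = res[1][0]["ssr_ftest"][1]  # F-test p-value at lag 1
print(f"p(x Granger-causes y) = {p_value:.3g}")
```

As I understand it from the talk, causal density then extends this pairwise test to a whole network: roughly, the fraction of ordered pairs of variables showing significant Granger causality.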

Why use different measures in parallel? To capture subtly different aspects, and to override small deficiencies in individual measures (and to avoid overemphasising one measure at the expense of others - which measure do you choose to trust?). Gaillard et al 2009 is a good example of putting multiple measures together in parallel, agreeing that there is no single measurement for consciousness but several potential measurements.

How do you decide what to measure and what measurements to use (especially if different measures are getting different results)? There are some boundaries around whether something is conscious to some degree, although these aren't discrete (conscious or not conscious). Intuition helps, and if two measures give different results, this is good for refining the interaction of the measures (although surely also good for criticising the relevance of the measures themselves? Anil didn't really discuss that). Implementation details will also play a part in choosing measures: what is appropriate and reasonable to use in a given scenario.

Useful references:

  • Behavioural measures of consciousness: Seth 2008 (Consc Cog); Seth et al 2008 (Trends Cog Sci); Persaud et al 2007 (Nature Neuroscience) (post-decision wagering)
  • Brain-based measures of consciousness: Berger 1929
  • Measures of complexity: Tononi 2004 (BMC Neuroscience); Seth 2005 (Network Comp Neur Sys); Seth 2008 (Cogn Neurodynamics)
  • Structural properties of consciousness: Seth 2009 (Cog Computation); Seth & Clowes 2008 (AI in Medicine)
  • Using multiple measures in parallel: Gaillard et al 2009 (PLoS Biol)