Tuesday, 15 June 2010
43 dodgy statements on computer art
Some really interesting statements here, whether or not you agree with all of them. From Brian Reffin Smith's blog. My favourite is #41...
http://zombiepataphysics.blogspot.com/2010/03/43-dodgy-statements-on-computer-art.html
Wednesday, 7 April 2010
Empirical approaches to Performance: Empirical Musicology II conference
Empirical approaches to Performance: Empirical Musicology II conference (25-26 March 2010, School of Music, University of Leeds, UK)
empirical: "based on, concerned with, or verifiable by observation or experience rather than theory or pure logic : they provided considerable empirical evidence to support their argument."
musicology: "the study of music as an academic subject, as distinct from training in performance or composition; scholarly research into music."
(Definitions taken from New Oxford American Dictionary 2nd edition © 2005 by Oxford University Press, Inc.)
Whilst musicology has traditionally been highly theoretical, this interdisciplinary conference emphasised an empirical approach, presenting a diverse range of scientific and practical approaches to the study of music. Focussing on music performance, the conference brought together people from a variety of academic backgrounds to share knowledge and methodologies across disciplines.
The two keynote speakers (Eric Clarke and David Temperley) represented two distinct areas of the spectrum of research covered during the conference. As co-editor of Empirical Musicology: Aims, Methods, Prospects (Oxford University Press, 2004) and an influential musicologist in this field over many years, Eric Clarke gave a historical and critical overview of the use of empirical methods in music research, leading up to a project he is currently involved in, the AHRC Research Centre for Musical Performance as Creative Practice (CMPCP). David Temperley brought to his keynote his expertise in probabilistic methods of music analysis and music cognition, discussing how musicians control the flow of musical information during performance.
At an educated guess, I was one of very few participants not based in a music department (unsurprisingly, for a musicology conference!), although several presenters did come from multi-disciplinary research groups. The level of interdisciplinarity in the talks meant that, as a music informatician, I didn't feel at all out of place academically. Many methodologies and tools were being applied outside their traditional domains to explore a wide range of musical detail, taking advantage of what new technologies have to offer the music researcher. Those that stood out particularly, in my memory at least, were:
- Elaine King and collaborators used a statistical ordination technique borrowed from ecology and educational research (canonical ordination, via the CANOCO software) to cluster participant data and extract what students considered their main motivations for preparing for assessed performances. (Useful for me, as I am looking at how best to cluster large data sets to extract key themes; a toy sketch of the general clustering idea follows this list.)
- There was a (beautifully presented) talk from Tal-Chen Rabinowitch on work examining associations between musical interaction in groups of children and the children's emotional empathic development. To measure emotional empathy, three different measures were used as a battery: two came from previous literature and the third was devised for this study. (Useful for me, as I am looking at how best to measure how creative something is.)
- Mark Doffman's presentation on jazz musicians' non-verbal communication focused specifically on how different groups of musicians negotiate how to end an improvisation. His research analysed video footage of different types of jazz musicians to examine their communicative behaviour and the musical co-ordination taking place. (I really identified with this, having more than once been in the position of jamming with other musicians, playing a piece, and having no idea how we were going to make it end!)
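As a loose stand-in for the ordination idea mentioned in the first item, here is a toy sketch: project questionnaire responses into a low-dimensional space and cluster them to surface shared motivations. To be clear about the assumptions: CANOCO implements canonical (constrained) ordination, which is more sophisticated than this; plain PCA plus k-means (via scikit-learn) is just the simplest analogue, and the Likert-scale data is invented.

```python
# Toy ordination-and-clustering sketch (NOT canonical ordination):
# PCA projects hypothetical questionnaire responses onto two axes,
# then k-means groups the students by shared motivation patterns.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical Likert-scale responses: rows = students, columns = items
# (e.g. "I practise to pass the exam", "I practise for enjoyment", ...).
responses = np.array([
    [5, 4, 1, 2],   # assessment-driven
    [5, 5, 2, 1],
    [1, 2, 5, 4],   # enjoyment-driven
    [2, 1, 4, 5],
    [3, 3, 3, 3],   # mixed motivations
])

coords = PCA(n_components=2).fit_transform(responses)      # ordination-style axes
labels = KMeans(n_clusters=2, n_init=10).fit_predict(coords)
print(coords.round(2))   # each student's position on the two axes
print(labels)            # which motivation cluster each student falls into
```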
To conclude - here is my presentation at this conference, looking at how we can empirically capture what it means to be creative as a musical improviser:
Defining Creativity in Music Improvisation (presentation slides)
How is creativity manifested in improvisation? We have an intuitive understanding of the concept of creativity that we can use introspectively to suggest answers to this question, both in theory and during performance. If, though, we want to program a computer to generate music in a creative way, the computer does not understand what creativity is. We cannot ask the computer to behave creatively unless we also give it some definition of what such behaviour entails. So the problem becomes: how do we define musical creativity to a computer?
This work uses empirical methods borrowed from linguistics to capture the words which we strongly associate with creativity. An analysis of the language used in dictionary definitions and academic papers on creativity, as compared to everyday language use, has produced a list of words which we commonly use to discuss creativity, e.g. innovation, openness, divergent. After conducting a survey on how these words can be applied in the context of music improvisation, I empirically derive key attributes of creativity in this musical domain which can be used to guide an artificially intelligent musical system towards generating creative musical behaviour.
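To make the corpus comparison above concrete, here is a minimal sketch of one standard corpus linguistics technique for this kind of task: ranking words by Dunning's log-likelihood keyness, computed from a word's frequency in a study corpus (creativity literature) versus a reference corpus (everyday language). The toy corpora are invented, and the exact statistic used in the study may well differ.

```python
import math
from collections import Counter

def log_likelihood(a, b, n_study, n_ref):
    """Dunning's log-likelihood (G2) keyness for one word, where a and b
    are the word's counts in the study and reference corpora."""
    e1 = n_study * (a + b) / (n_study + n_ref)  # expected count, study corpus
    e2 = n_ref * (a + b) / (n_study + n_ref)    # expected count, reference corpus
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

def keywords(study_tokens, ref_tokens, top_n=10):
    """Rank words that are unusually frequent in the study corpus."""
    study, ref = Counter(study_tokens), Counter(ref_tokens)
    n_study, n_ref = sum(study.values()), sum(ref.values())
    scored = [
        (w, log_likelihood(c, ref[w], n_study, n_ref))
        for w, c in study.items()
        if c / n_study > ref[w] / n_ref      # over-represented words only
    ]
    return sorted(scored, key=lambda x: -x[1])[:top_n]

# Toy usage: a scrap of "creativity literature" against everyday language.
creativity_text = "innovation divergent novel openness novel innovation".split()
everyday_text = "the cat sat on the mat and the dog chased the cat".split()
print(keywords(creativity_text, everyday_text, top_n=3))
```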
Monday, 1 March 2010
Self-assessment of creativity
Interesting post at James C. Kaufman's blog on our self-awareness of our own creativity:
The American Idol Effect: Why We're Not Too Good at Judging Our Own Creativity
I would interpret 'metacreativity' as being creative about creativity, rather than being aware of the extent of your own creativity. However, there are some useful links in here on experiments investigating how students rate their own creativity compared to the 'expert' opinion (in these cases, the teachers'). I wonder if there has been any work on people being creative outside of the student/teacher domain?
Wednesday, 20 January 2010
Creativity and Cognition conference October 2009 - feedback
The conference overall: A real eye-opener for the types of research going on under the banner of 'creativity'. The conference was single-stream, meaning no picking and choosing between parallel sessions: just one presentation at a time, which you could choose to attend (or not).
There were disappointingly few people there working in computational modelling or creative computer systems, or in the psychological processes behind creativity, and a surprisingly high proportion of people working in design.
Once I got used to the balance of papers, though, I found the conference much more useful: I could give intense concentration to the most relevant papers, and just sit back, enjoy the other presentations and pick out the bits that were useful to me. Quite often a talk that seemed completely irrelevant to my research had some quite nice general observations that fitted in with my growing ideas about how creativity is more than just producing an end product, with the process, the producer and the surrounding environment/audience/influences playing an important role too. (In fact one of the graduate symposium papers, by Carly Lassig, gave a really useful reference on this: a paper by Rhodes.)
Graduate Symposium: This is the way for PhD students to do conferences! Organised by Celine Latulipe and John Thomas, the symposium was held the day before the main conference and was a closed session, with only the participants, organisers and invited guests present.
For me, the people there to comment (Celine Latulipe and Ernest Edmonds) were very useful to have around: Ernest Edmonds has links with Sussex and I like the way he thinks about creativity, while Celine Latulipe comes from a computer science background and, together with her grad student, had a very interesting paper on a creativity support tool evaluator.
It was a good mix of people, and though some talks were clearly more relevant to some people than others (reflecting the overall mix of the conference), everyone could make comments and have useful discussions. It was a shame the symposium fell on the same day as a workshop by Linda Candy and Zafer Bilda on evaluating creativity, as I would really like to have gone to that, but that was the only minor downside.
Specific things to follow up after the conference:
- As I mentioned above, the reference to a Rhodes paper about the '4 Ps' of creativity (Person, Product, Press, Process) looked useful, although I haven't been able to source a copy of it yet.
- A paper by Ricardo Sosa, John Gero and Kyle Jennings fitted very closely with an idea I am starting work on, about modelling a creative society using an agent-based system (a toy sketch of this kind of simulation follows this list).
- Frieder Nake's (excellent) talk on algorithmic art and creativity underlined an opinion I come across more and more: that "machines can never be creative". In other words, if a computer can do it, it isn't creative, because we can see the processes it uses; hence the definition of creativity shifts with the times, as computers take on more tasks we would consider creative. I disagree with this (of course, for a PhD on evaluating computational creativity!) but really must acknowledge the debate in my work, although I don't think I want to wade into it too heavily; perhaps more philosophical tools are needed in my academic toolkit before I feel ready to tackle that kind of debate properly.
- The paper presented by Celine Latulipe's student, Erin Carroll, and a conversation with Erin afterwards, led me to look at principal component analysis and factor analysis for clustering words into semantic categories. I'm not sure yet if this is the way to go for this type of task (and I have in fact been advised against it by computational linguists!) but it's good to know about.
- Ben Shaw's talk on emergence in design was very well presented and gave useful links back to improvisation in creativity (particularly mentioning R. Keith Sawyer's work in this area). Ben and I also had some conversations which led to him giving me some very useful feedback on work I have done with computational linguistics methods - hopefully he found some useful things in my work too.
- Another useful presentation came from Brian Magerko and various colleagues at Georgia Tech (surely the most represented institution at an international conference that I have ever seen: around 1 in 3 submissions, I think). This paper discussed observing improvisation in people with a view to replicating it in agent-based modelling, which is very closely linked to some multi-agent improvisation simulations I am starting now. While their findings caused some debate later on (particularly over whether or not we use a stored mental model of the world), the paper has given me some useful thinking material as I approach my own work in multi-agent systems.
- An interesting definition of creativity from Viveka Weiley: Creativity = 1. New, 2. Valuable, 3. 'x'. The key question is what the 'x' is that we miss if we consider creativity to be tied only to the concepts of novelty and value.
- David Norton's talk on DARCI, a computer artist trained using neural networks, was the closest to mine among the graduate symposium talks, and we covered a lot of common ground in our presentations. In particular, the debate from Simon Colton's paper came up: is something actually creative if it is perceived as creative? It was good to hear about the project, and it will be interesting to see how it turns out.
- The very last talk was a keynote address by Mihaly Csikszentmihalyi. I have to admit that, from the book or two of his I've read, I wasn't really expecting the talk to be very relevant: more social comment and individual case studies (with big conclusions drawn from limited findings...?). I was very pleasantly surprised. The talk was entertaining and useful, with plenty of relevant academic material, including a description of the attributes of creative people as a continuum which creative people can navigate across very deftly as needed.
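Since two of the items above touch on agent-based modelling of improvisation and creative societies, here is a minimal, hypothetical sketch of the general shape such a simulation might take. Everything in it is invented for illustration (one-dimensional 'artefacts', novelty-based peer rating, stylistic drift towards admired peers); it is not the Sosa/Gero/Jennings model, nor Magerko's system, nor my own.

```python
# Toy agent-based "creative society": each agent produces an artefact
# (here just a number), rates its peers' artefacts by novelty relative
# to what it has already seen, and drifts its own style towards the
# agent whose work it rated most highly.
import random

class Agent:
    def __init__(self, style):
        self.style = style    # centre of this agent's output distribution
        self.memory = []      # artefacts this agent has encountered so far

    def create(self):
        return random.gauss(self.style, 1.0)

    def rate(self, artefact):
        # Novelty = distance from the nearest remembered artefact.
        if not self.memory:
            return 1.0
        return min(abs(artefact - m) for m in self.memory)

agents = [Agent(random.uniform(0, 10)) for _ in range(5)]
for step in range(20):
    works = [(maker, maker.create()) for maker in agents]
    for rater in agents:
        scores = {maker: rater.rate(w) for maker, w in works if maker is not rater}
        admired = max(scores, key=scores.get)
        rater.style += 0.1 * (admired.style - rater.style)  # social influence
        rater.memory.extend(w for _, w in works)

print([round(agent.style, 2) for agent in agents])  # final stylistic positions
```

Even a toy like this raises the questions from the conference: does an agent need a stored model of its peers, and is novelty-to-the-rater an adequate proxy for creativity?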
So that's the Creativity and Cognition conference for me. Plus I met some really interesting people - hopefully some useful contacts! I didn't talk to everyone I had wanted to, and didn't always manage to maintain concentration throughout; by the end of the conference I was more than ready to come back to normal life, as it was quite a long week! But I guess you can't talk to everyone and do everything.
Tuesday, 20 October 2009
Creativity and Cognition 2009 conference, 27-30 Oct, Berkeley, CA
Off to California to attend the 2009 Creativity and Cognition conference. I'm presenting my PhD work to date at the graduate symposium on the first day then have the rest of the conference to look forward to. Will record my thoughts here after the conference.
Friday, 9 October 2009
Conceptual art
In a discussion about how you could place a value on a piece of art, I've been introduced to "conceptual art". This is when the artistic process is of far higher importance than the end-product itself; in fact the end-product becomes almost irrelevant - merely a side-effect of the process being executed.
Sol LeWitt defined Conceptual Art in "Paragraphs on Conceptual Art", Artforum, June 1967, as:
In conceptual art the idea or concept is the most important aspect of the work. When an artist uses a conceptual form of art, it means that all of the planning and decisions are made beforehand and the execution is a perfunctory affair. The idea becomes a machine that makes the art.
If we assess how creative a piece of conceptual art is solely by evaluating the product, there are two negative consequences:
- The primary intentions of the artist are ignored (the artist is more focussed on how the art is made than what the result is).
- The level of creativity presented will probably be underestimated, especially if the art results in producing something that might seem commonplace outside the context of that art installation.
Another example is Marcel Duchamp's Fountain (1917), an exhibited urinal. With this piece, Duchamp intended the focus to be on how his choice of object was interpreted, rather than on the physical object itself.
Duchamp submitted Fountain to an art exhibition run by the Society of Independent Artists, where every submission would be accepted and exhibited. The submission sparked a debate within the judging panel (of which Duchamp was himself a member!) as to whether this was in fact a piece of art. Eventually Fountain was included in the exhibition but hidden from sight, and Duchamp resigned from the Society's board.
Since then, though, Fountain has been judged the most influential modern artwork of all time. Who was right: the 500 art experts who made this judgement, or the panel who rejected Fountain as not being a piece of art?
While I don't intend this post to express that the creative process is far more important than the creative product, for me this is interesting evidence as to why process is as important as product.
There is an interesting discussion of the philosophy of conceptual art in Schellekens, Elisabeth, "Conceptual Art", The Stanford Encyclopedia of Philosophy (Spring 2009 Edition), Edward N. Zalta (ed.).
Wednesday, 7 October 2009
A ground truth for creativity (or lack of)
My officemate and I had quite an interesting discussion yesterday as to whether there is a 'right' or a 'wrong' answer to give, when you assess how creative something is.
Different people come up with different assessments, based on a whole range of factors such as their expertise in that domain, their knowledge of the process by which creative products are produced, personal opinion and bias according to their tastes, and the amount of time and effort which they give to the assessment.
Part of my work involves a computational system which produces so-called creative behaviour and then assesses how creative that behaviour is, as part of feedback into refining the creative process. (I shall discuss this work in more depth in a future post, when it is further developed.)
How do I test whether the system 'works', i.e. whether its behaviour could be deemed creative and its self-assessment of its own creativity is accurate? There is no ground truth for creativity; in fact this lack of ground truth is the central research problem I am addressing! To test its performance against other creativity measurements would be to test one theory against another, rather than testing the model against real life.
So let's test the model against real life: compare its judgements of creativity to judgements made by people. If there is substantial overlap and agreement between machine and people, then the machine is doing a good job of approximating human assessment of creativity, which is the only guide we have to follow at present.
Next problem: what if my system is shown to perform well against some 'right answer', coming up with a satisfactory evaluation of the creativity present in the system - does this mean my system is also 'right'? I would say no, it is not coming up with a universally correct assessment, because no such universally correct assessment exists. We can approximate what a number of people would collectively decide about a system or item's creativity, and aim to match that consensus of opinion. More often than not, however, someone will disagree...
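As a minimal sketch of that comparison, assuming we have the system's creativity scores and several human ratings of the same items (all numbers below are invented), agreement with the human consensus could be measured with something as simple as a Pearson correlation:

```python
# Correlate the system's creativity scores with the mean of several
# hypothetical human ratings of the same items. A high correlation
# suggests the system approximates the human consensus; it says nothing
# about a universally "correct" answer, because none exists.
import numpy as np

system_scores = np.array([0.9, 0.2, 0.6, 0.75, 0.4])   # machine judgements
human_ratings = np.array([                              # raters x items, 1-10 scale
    [8, 3, 6, 7, 5],
    [9, 2, 5, 8, 4],
    [7, 4, 7, 6, 5],
])

consensus = human_ratings.mean(axis=0)                  # human consensus per item
r = np.corrcoef(system_scores, consensus)[0, 1]         # Pearson correlation
print(f"agreement with human consensus: r = {r:.2f}")
```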