| Name: | Description: | Size: | Format: |
|---|---|---|---|
|  |  | 1.04 MB | Adobe PDF |
Advisor(s)
Abstract(s)
MuSyFI is a system that tries to model an inspirational computational creative process. It uses images as a source of inspiration and begins by implementing a possible translation between visual and musical features. Results of this mapping are fed to a Genetic Algorithm (GA) to try to better model the creative process and produce more interesting results. Three different musical artifacts are generated: an automatic version, a co-created version, and a genetic version. The automatic version maps features from the image into musical features non-deterministically; the co-created version adds harmony lines manually composed by us to the automatic version; finally, the genetic version applies a genetic algorithm to a mixed population of automatic and co-created artifacts.

The three versions were evaluated for six different images by conducting surveys. These evaluated whether people considered our musical artifacts music, whether they thought the artifacts had quality, whether they considered the artifacts 'novel', whether they liked the artifacts, and lastly whether they were able to relate the artifacts with the image by which they were inspired. We gathered a total of 300 answers, and overall people answered positively to all questions, which confirms our approach was successful and worth exploring further.
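The genetic version described above could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: artifacts are simplified to lists of MIDI pitches, and the fitness, crossover, and mutation operators shown here are placeholder assumptions.

```python
import random

random.seed(0)

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major pitches, one octave

def random_artifact(length=16):
    """Stand-in for an artifact produced by the automatic/co-created stages."""
    return [random.choice(SCALE) for _ in range(length)]

def fitness(artifact):
    # Toy criterion (assumption): reward stepwise melodic motion
    # by penalizing large intervals between consecutive notes.
    return -sum(abs(a - b) for a, b in zip(artifact, artifact[1:]))

def crossover(a, b):
    # Single-point crossover between two parent melodies.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(artifact, rate=0.1):
    # Randomly replace notes with other in-scale pitches.
    return [random.choice(SCALE) if random.random() < rate else n
            for n in artifact]

def evolve(population, generations=50, elite=4):
    """Evolve a mixed population of automatic and co-created artifacts."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(len(population) - elite)]
        population = parents + children
    return max(population, key=fitness)

# Mixed starting population, as in the paper's genetic version.
automatic = [random_artifact() for _ in range(10)]
co_created = [random_artifact() for _ in range(10)]  # stand-in
best = evolve(automatic + co_created)
print(best)
```

The key point mirrored from the paper is the mixed initial population: both automatic and co-created artifacts enter the same evolutionary loop.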
Description
UIDB/00693/2020
UIDP/00693/2020
Keywords
Computational Creativity; Music Generation; Genetic Algorithm; Inspiration; Feature Translation
Educational Context
Citation
Publisher
Association for Computational Creativity
