Workshop Overview
Part One
In the first part of the workshop, participants will develop new instrument designs using the DALL-E 2 text-to-image generator. The workshop facilitator will introduce the program and demonstrate how to use the ‘prompts’, ‘variations’ and ‘outpainting’ functions effectively. Participants will have time to create several prototypes of new instruments and to refine their images.
Participants are also invited to bring images of their own pre-existing instrument designs to the workshop, either to generate variations of those designs or to augment them with contributions from the AI. This gives participants a chance to experience real-world uses of DALL-E for instrument design within an exploratory space.
Part Two
In the second part of the workshop, participants will compose a short audio or MIDI demo in their chosen DAW to accompany their AI-generated images, imagining what their instrument would sound like if brought into the physical world. These ‘instrument demos’ will be shared with the group to discuss perceptions of the instrument design, assumed interactions and potential fabrication methods. This will be followed by an activity in which participants imagine how users would interact with their instrument and propose a speculative mapping strategy between the interface and their sound design.
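As a concrete illustration of what a speculative mapping strategy might look like, the sketch below linearly scales a hypothetical sensor reading (in a MIDI-style 0–127 range) to a filter cutoff frequency. Every name and value here is illustrative, not part of the workshop materials.

```python
# Minimal sketch of one possible interface-to-sound mapping:
# a linear scaling from a control input range to a synthesis
# parameter range. All names and ranges are hypothetical.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max]."""
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

# Map a MIDI-CC-style value (0-127) to a cutoff between 200 Hz and 8000 Hz.
cc_value = 64
cutoff_hz = scale(cc_value, 0, 127, 200.0, 8000.0)
print(round(cutoff_hz, 1))  # → 4130.7
```

In practice a mapping strategy might be non-linear (e.g. exponential for pitch or loudness) or many-to-many, but a simple one-to-one scaling like this is a common starting point for discussing how gesture relates to sound.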
Workshop Outcomes
The aim of these activities is to question how AI text-to-image programs can:
Generate new ideas for novice and experienced instrument designers
Engage end users in the instrument design process as co-designers
Help instrument designers develop their sound design
Generate variations of pre-existing instruments
Develop how we visually present new NIMEs, and
Understand the assumptions that arise from their design aesthetic.