Mood - Meeting 1
I. Notes from meeting with Professor Winston and Sajit Rao.
- Expectation of what to listen to with patterns of attentional state
-> we still perceive it even if fragmented / played over another chord
Sajit's advice:
- Show in our project why attentional state is useful / better than, say, neural nets
- Find an invariant of Jazz / Classical in the changes of attentional state that sticks out.
- Map operations in vision to operations in sound - what is selection?
- Think about hardware support -> what in our brain could be a primitive for sound attention?
II. April 4th meeting:
Focus project
- We only have 1 month => get a feasible project
- single instrument = piano, single-handed
- maybe instead input = a MIDI file
- Project could be: Play 20 notes and let the program figure out what type of music it is.
Experiment with
- attentional state
- neural net learning
- arch learning (presumably Winston-style learning from examples and near misses)
=> figure out which does better
What to base our system inputs on:
- notes (pitch)
- difference in notes (d(pitch) / dt)
- past 3 notes / seconds, for increasing values of 3
-> that is, experiment with different sizes of the past window
- volume of note (how strong)
- d(volume) / dt
- speed of notes
- d(speed) / dt -> how the speed of playing changes with time
What to come up with
- Weighting of music will be a 4-vector
(ex: [20, 0, 0, 2] on the [Jazz, Classical, Romantic, Modern] scale)
III. Questions:
- How to train the program
- How does it learn the first pattern
- How can it learn multiple things simultaneously
The ideal here would be that the program learns more than one thing at
a time: many different input patterns can be presented to it, and it
would itself separate them into different types of music. How to do
this?