algorithmic modeling for Rhino
So I am working on a problem that takes some tinkering outside the box... I am trying to send a sequence of instructions (MIDI) from audio software to Grasshopper via VVVV, to animate the definition, and to export the geometry back out to VVVV to render it LIVE! RawRRRR. The audio software in this case is Ableton Live, a digital audio workstation (DAW) and a leading industry standard in contemporary music production.
The good news is that VVVV and Ableton Live Lite are both free.
I am not trying to use an iPad as a controller for Grasshopper. I want to work with a timeline (similar to Maya, Ableton, or any other DAW) inside Grasshopper in an intuitive way. Currently there is no way, that I know of, of SEQUENCING your definition the way you want it to be seen.
No more cumbersome export/import workflows... I don't need hyperrealistic renderings most of the time. So much time invested in googling the right way to import, export... mesh settings... this workflow works for some and not for others... that workflow works only if... and still you cannot render it live, nor change the sequence of instructions WHILE THE VIDEO is playing. And I think no one wants to present the Rhinoceros viewport. BUT the VVVV viewport is different: it is used professionally for VJing and for many custom audiovisual installations at events. You can see an example of how sound and visuals come together in this post, using only VVVV and Ableton. http://vvvv.org/documentation/meso-amstel-pulse
I propose a NEW method: make a definition, wire it to Ableton, draw in some MIDI notes, and see it through VVVV LIVE while you sequence the animation the WAY YOU WANT IT TO BE SEEN DURING YOUR PRESENTATION FROM THE BEGINNING. Make a whole set of sequences in Ableton, go back and change some notes, and the whole sequence will change RIGHT IN FRONT of you. Yes, you can just add some sound anywhere in the process, or take the sound waves (square, saw, whatever), or take the audio and influence geometric parameters using custom patches via VVVV. I cannot even begin to tell you how sophisticated digital audio sound design technology has gotten in the last ten years. This is just one example, which isn't even that advanced by today's standards in sound design (and the famous producers would say it's not about the tools at all). https://www.youtube.com/watch?v=Iwz32bEgV8o
I just want to point out that Grasshopper shares the same interface paradigm with VVVV (around since 1998) and Max for Live, a plug-in inside Ableton. AudioMulch is yet another one that shares this interface of plugging components into each other, and it lets users create their own sound instruments. VVVV is built on Delphi, I believe.
So the current wish list is...
1) Grasshopper receives a sequence of commands from Ableton. DONE
Thanks to Sebastian's OSCglue VVVV patch and this one: http://vvvv.org/contribution/vvvv-and-grasshopper-demo-with-ghowl-udp
After this is done, it's a matter of trimming and splitting the incoming string.
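For what it's worth, the trimming and splitting step can be sketched in plain Python (e.g. inside a GhPython component). The "/channel/name value value" message layout below is my assumption for illustration, not gHowl's or OSCglue's actual wire format:

```python
# Minimal sketch: split an incoming OSC-style text message into an
# address and a list of integer values. The message format is assumed.

def parse_message(raw):
    """Split e.g. '/1/note 60 127' into ('/1/note', [60, 127])."""
    parts = raw.strip().split()
    address = parts[0]
    values = [int(p) for p in parts[1:]]
    return address, values

address, values = parse_message("/1/note 60 127")
print(address, values)  # → /1/note [60, 127]
```

From here the address tells you which track or note lane the message belongs to, and the values are ready to drive sliders.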
2) translate numeric oscillation from ableton to change GH values
The video below shows what the control interface for both values (numbers) and the MIDI notes looks like.
3) MIDI note in = toggle a GH component (this one could be tricky)
For this... I am thinking it would be great if it were possible to make a "MIDI learn" function in Grasshopper, where one can DROP IN A COMPONENT LIKE GALAPAGOS OR TIMER and assign the component to a signal in, in this case a MIDI note. There are 128 MIDI notes in total (http://www.midimountain.com/midi/midi_note_numbers.html), and that is only for one channel; MIDI gives you 16 channels per port, and Ableton can route as many tracks as you like. I usually use 16.
I have already figured out a way to send a string into Grasshopper from Ableton Live. But the problem is how Grasshopper can actually listen, not just take the data in: interpret MIDI note and CC value changes (which run from 0 to 127) and perform certain actions.
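A hedged sketch of that "listen and act" part, assuming the incoming messages have already been split into kind/number/value. The two dictionaries stand in for real GH components (a preview toggle and a slider); the 0-to-50 target range is borrowed from the example below:

```python
# Sketch of a listener that interprets MIDI-style messages and reacts:
# a note-on flips a toggle, a CC message rescales into a parameter range.
# The dictionaries are stand-ins for actual Grasshopper components.

toggles = {}   # note number -> bool (e.g. preview on/off)
params = {}    # CC number -> float (scaled parameter value)

def handle(kind, number, value, lo=0.0, hi=50.0):
    if kind == "note" and value > 0:            # note-on: flip a toggle
        toggles[number] = not toggles.get(number, False)
    elif kind == "cc":                          # CC: map 0..127 onto [lo, hi]
        params[number] = lo + (hi - lo) * value / 127.0

handle("note", 60, 127)   # toggle the component bound to note 60
handle("cc", 1, 64)       # set the parameter bound to CC 1 to ~25.2
```

The same dispatch idea scales up: one entry per learned note or CC number, which is essentially what a "MIDI learn" feature would populate.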
Basically, what I am trying to achieve is this: some time passes, then a parameter is set to change from value 0 to 50, for example. Then some time passes again, then another parameter becomes "previewed", then baked. I have seen some examples of Hoopsnake, but I couldn't tell whether you can really control the values in a clear x and y graph, where x is time and y is the value. This would be considered a basic feature of modulation and automation in music production. NVM, it's been DONE by Mr. Heumann. https://vimeo.com/39730831
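The x-is-time, y-is-value idea can be sketched as a breakpoint envelope with linear interpolation, the way DAW automation lanes work. The breakpoints below are invented for illustration:

```python
# Automation-lane sketch: breakpoints (time, value) define an envelope,
# and sampling it at any time t returns the interpolated value.
import bisect

def sample_envelope(points, t):
    """points: sorted list of (time, value); linear interpolation between them."""
    times = [p[0] for p in points]
    if t <= times[0]:
        return points[0][1]
    if t >= times[-1]:
        return points[-1][1]
    i = bisect.bisect_right(times, t)
    t0, v0 = points[i - 1]
    t1, v1 = points[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Ramp 0 -> 50 over 4 s, hold until 8 s, ramp back down by 10 s.
env = [(0.0, 0.0), (4.0, 50.0), (8.0, 50.0), (10.0, 0.0)]
print(sample_envelope(env, 2.0))  # → 25.0
```

Driving a slider from `sample_envelope(env, current_time)` on a timer tick gives exactly the "some time passes, then the parameter moves" behavior.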
4) send points, lines, surfaces and meshes back out to VVVV
5) render it using VVVV and play with its enormous collection of components... it's been around since 1998, for the sake of awesomeness.
This kind of digital operation-to-hardware connection is usually what's done in digital music production setups. I did look into MIDI controller + Grasshopper work, and I know it's been done, but that has the obvious limitation of not being precise: it only gives you values from 0 to 127. I am thinking that MIDI can still be useful here, because I can program very precise and complex sequences with ease from music production software like Ableton Live.
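One hedged workaround for the coarse 0..127 resolution is to rescale and smooth each incoming value on the Grasshopper side, so the stepped controller data turns into continuous motion. The smoothing factor here is an arbitrary choice:

```python
# Exponential smoothing sketch: each update moves the current value a
# fraction alpha of the way toward the (rescaled) incoming MIDI target,
# hiding the 128-step quantization. alpha = 0.2 is arbitrary.

def smooth(current, target, alpha=0.2):
    """Move current a fraction alpha of the way toward target."""
    return current + alpha * (target - current)

value = 0.0
for raw in [64, 64, 64, 64, 64]:                # repeated coarse MIDI readings
    value = smooth(value, raw / 127.0 * 50.0)   # rescaled onto a 0..50 slider
print(round(value, 2))  # → 16.94 (still approaching the target of ~25.2)
```

The trade-off is a short lag behind the controller, which is usually acceptable for animation.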
This is ongoing design research for a performative exhibition due in Bochum, Germany, this January. I will post the definition if I get somewhere. A good place to start for me is the nesting sliders by Monique. http://www.grasshopper3d.com/forum/topics/nesting-sliders
So I had a meeting with Sebastian Oschatz in Frankfurt, one of the founders and developers of VVVV.
The answer is that what I was looking for is comparable to constantly traveling back and forth between three different universes (VVVV, Grasshopper, and Ableton Live (Max/MSP)) that speak different languages. How many people use Grasshopper? Is it the majority now, or... no, probably not. And then Ableton users... how many people know and use Grasshopper AND Ableton? Yep, not so many. A minority within a minority. Who can benefit from it? That's the question.
With his extensive knowledge of VVVV, he made it quite clear that there was in fact NO need to bring in Ableton Live, although by using UDP it would be possible to have multiple computers communicating via WLAN or whatever, UNLESS I am producing and designing sound that shapes form or organizes matter.
So I am looking into different ways to interpret sound, like Echo Nest: tempo, structure, keys... everything. NOW, on top of that, I continue to look into how to organize matter with sound, physically, because there is a way I know :), and if it is possible to simulate and write an algorithm for this... then the geometry of form will communicate with nuances in reverberation time and filter envelopes from sound design. But this is a major rabbit hole... but who knows, I am Alice.
Timeline and sequencing are totally possible just within VVVV, as it turns out. I thought of taking advantage of the advanced Ableton Live interface, but that seems more of a can of worms right now. And I am almost certain that it is possible to record the streaming video within VVVV, so that it isn't only for a live demo in a kitchen but also for higher-profile presentations.
So I am STILL solid on bringing VVVV and Grasshopper closer together. I am looking into transferring not only points, but vectors, meshes, breps, surfaces, polysurfaces, and string values to and from VVVV and GH. So far I have points... But if this is done, then NO MORE SCREEN capturing of viewports for animation, and it would be possible to go from changing the main reference surface of a design (e.g. a footprint) to showing, the same evening, a revised demo of how the interior and exterior lighting design schemes work, in a far more experiential and narrative way than traditional animation. IT COULD BE that I only present with VVVV rather than PDFs or slides. SO I will keep you all posted...
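As a sketch of what "transferring geometry as strings" could look like, here is one possible wire format for points over UDP. The "x,y,z;x,y,z" layout and the port number are assumptions for illustration, not what vvvv or gHowl actually expect:

```python
# Sketch: flatten points into a delimited string for UDP transport and
# parse them back on the other side. Wire format and port are assumed.
import socket

def encode_points(points):
    """[(x, y, z), ...] -> 'x,y,z;x,y,z'"""
    return ";".join("%g,%g,%g" % p for p in points)

def decode_points(text):
    """'x,y,z;x,y,z' -> [(x, y, z), ...]"""
    return [tuple(float(c) for c in p.split(",")) for p in text.split(";")]

pts = [(0.0, 0.0, 0.0), (1.5, 2.0, 3.0)]
wire = encode_points(pts)

# Sending side (hypothetical receiver at 127.0.0.1:4444):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(wire.encode(), ("127.0.0.1", 4444))

print(decode_points(wire) == pts)  # → True
```

Meshes could follow the same pattern with separate vertex and face lists, though breps and NURBS surfaces would need a richer serialization.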
1) MIDI goes from 0 to 127, giving only 128 steps.
2) Grasshopper is not able to process so many changes in values so quickly. I am not sure it is a good idea to visualize all the solutions, because that is a big performance drawback, resulting in wasted time and a full local disk.
3) more of a conceptual problem: what is the use?
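For problem 2), one common fix is to rate-limit the incoming stream so Grasshopper only recomputes a few times per second instead of on every MIDI tick. This is a generic throttle sketch, with the 10 Hz cap as an arbitrary assumption:

```python
# Throttle sketch: accept an update only if enough time has passed since
# the last accepted one, dropping the rest. 10 Hz is an arbitrary cap.
import time

class Throttle:
    def __init__(self, max_hz=10.0):
        self.min_interval = 1.0 / max_hz
        self.last = float("-inf")       # so the very first update passes

    def ready(self, now=None):
        """Return True if this update should be processed, else drop it."""
        now = time.monotonic() if now is None else now
        if now - self.last >= self.min_interval:
            self.last = now
            return True
        return False

t = Throttle(max_hz=10.0)
print(t.ready(now=0.0), t.ready(now=0.05), t.ready(now=0.2))  # → True False True
```

Inside a GhPython component you would call `ready()` with no argument on each incoming message and only update the sliders when it returns True.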
If the purpose of this exercise is to look through a population of possibilities, there are much more efficient ways to do so, as it turns out. It's called a visual query builder.
Check out this research. Nowadays I am inclined to think something like this would be a more ideal playmate for any parametric design research, where the goal is to search a dataset with the intention of shrinking it, without actually visualizing the geometry along the way. (From min 8:00)