project below: should I be learning Grasshopper & Rhino together, or just Rhino first?
I'm trying to panel modules with low tolerances. I've prototyped regular shapes like geodesics and am now looking to experiment with irregular forms that use many different panel shapes.
I understand some things are best done through Grasshopper when using Paneling Tools. I'm trying to figure out whether I can achieve what I want with PT alone, or whether I should do it through Grasshopper (or some other route).
I'm on the Mac WIP. The module was built in SketchUp, and all the components seem to be in order as blocks, but I'm having problems running the ptpanel3dcustom command. I'm thinking it's maybe a bug in the WIP, something wrong with my input, or that I imported the SketchUp file the wrong way (I dropped it into the window). If the 3D command is run, it does nothing; if the 2D one (ptpanelgridcustom) is run, it crashes.
The tiling pattern: the green rectangle is a reference. Each tile contains 4 blocks, with 3 more nested in each.
How the module tiles.
The other thing I'm trying to do is specify that most of the lines in the panels don't bend or curve when they are paneled (or after something like a Cage Edit). For my purposes the lengths and angles can change, but the lines must remain straight.
These images show a test tile to be paneled onto an ellipsoid. When the tile is mapped to the grid the lines curve; this is an extreme example, but notice that a lot of tiles far from the hemispheres are also bent slightly.
These two questions have me stumped the most for now. What should I look into to get a better handle on these problem areas? Maybe I should try recreating the work on a Windows machine? Or perhaps I should get started with Grasshopper?
Thanks for reading.
Lu…
and export the geometry out to VVVV to render it LIVE! RawRRRR. In this case, paired with a digital audio workstation, Ableton Live, a leading industry standard in contemporary music production.
The good news is that VVVV and Ableton Live Lite are both free.
https://www.ableton.com/en/products/live-lite/
I am not trying to use an iPad as a controller for Grasshopper. I want to work with a timeline (similar to Maya, Ableton, or any other DAW (digital audio workstation)) inside Grasshopper in an intuitive way. Currently there is no way that I know of to SEQUENCE your definition the way you want it to be seen.
No more cumbersome export/import workflows... I don't need hyperrealistic renderings most of the time. So much time is invested in googling the right way to import and export, mesh settings... this workflow works for some and not for others, that workflow works if... and still you cannot render it live, nor change the sequence of instructions WHILE THE VIDEO is played. And I think no one wants to present the Rhinoceros viewport. BUT the vvvv viewport is different: it is used for VJing and for many custom audio-visual installations at events, done professionally. You can see an example of how sound and visuals come together in this post, using only VVVV and Ableton. http://vvvv.org/documentation/meso-amstel-pulse
I propose a NEW method: make a definition, wire it to Ableton, draw in some MIDI notes, and watch it through VVVV LIVE while you sequence the animation THE WAY YOU WANT IT TO BE SEEN DURING YOUR PRESENTATION FROM THE BEGINNING. Make a whole set of sequences in Ableton, go back and change some notes, and the whole sequence will change RIGHT IN FRONT of you. Yes, you can just add some sound anywhere in the process, or take the waveforms (square, saw, whatever), or take the audio and let it influence geometric parameters using custom patches via VVVV. I cannot even begin to tell you how sophisticated digital audio sound design technology has become in the last ten years... this is just one example, which isn't even that advanced by today's standards in sound design (and the famous producers would say it's not about the tools at all): http://www.youtube.com/watch?v=Iwz32bEgV8o
I just want to point out that Grasshopper shares the same interface paradigm with VVVV (1998) and Max for Live, a plug-in inside Ableton. AudioMulch is yet another one that shares this interface of wiring components to each other, and it allows users to create their own sound instruments. VVVV is built on VB, I believe.
So the current wish list is...
1) Grasshopper receives a sequence of commands from Ableton. DONE
Thanks to Sebastian's OSCglue vvvv patch and this one: http://vvvv.org/contribution/vvvv-and-grasshopper-demo-with-ghowl-udp
After this is done, it's a matter of trimming and splitting the incoming string.
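To illustrate, here is a minimal C# sketch of that trimming/splitting step (the "/0 cc value cc value ..." message layout is an assumption for illustration, not necessarily what the UDP/OSC component actually emits):

using System;
using System.Collections.Generic;

public static class OscSplit
{
  // Split an incoming message like "/0 5 0.50 6 0.75" into CC-number/value pairs.
  public static Dictionary<int, double> Parse(string msg)
  {
    var values = new Dictionary<int, double>();
    string[] tokens = msg.Trim().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    // tokens[0] is the address ("/0"); the rest alternate CC number, value.
    for (int i = 1; i + 1 < tokens.Length; i += 2)
    {
      int cc = int.Parse(tokens[i]);
      double v = double.Parse(tokens[i + 1]);
      values[cc] = v;
    }
    return values;
  }
}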
2) Translate numeric oscillation from Ableton into changing GH values.
The video below shows what the control interface for both values (numbers) and the MIDI notes looks like.
https://vimeo.com/19743303
3) MIDI note in = toggle a GH component (this one could be tricky).
For this... I am thinking it would be great if it were possible to make a "MIDI learn" function in Grasshopper, where one can DROP IN A COMPONENT LIKE GALAPAGOS OR TIMER and assign the component to a signal in, in this case a MIDI note. There are 128 MIDI notes in total (http://www.midimountain.com/midi/midi_note_numbers.html), and this is only for one channel. There are infinite channels in Ableton; I usually use 16.
I have already figured out a way to send strings into Grasshopper from Ableton Live. The problem is how to get Grasshopper to listen, not just take the data in: to interpret MIDI and CC value changes (which run from 0 to 127) and perform certain actions.
Basically what I am trying to achieve is this: some time passes, then a parameter is set to change from value 0 to 50, for example. Then some time passes again, then another parameter becomes "previewed", then baked. I have seen some examples of HoopSnake, but I couldn't tell whether you can really control the values in a clear x and y graph, where x is time and y is the value. This would be considered a basic feature of modulation and automation in music production. NVM, it's been DONE by Mr. Heumann: https://vimeo.com/39730831
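For the "listen, not just take it in" part above, a minimal C# sketch of the idea (the names are mine, not an existing GH API): keep the last seen value per CC and only fire an action when it actually changes, remapping the 0-127 MIDI range onto whatever range the parameter needs.

using System.Collections.Generic;

public class CcListener
{
  private readonly Dictionary<int, int> _last = new Dictionary<int, int>();

  // Returns true (with the remapped value) only when the CC actually changed.
  public bool TryUpdate(int cc, int midiValue, double min, double max, out double mapped)
  {
    mapped = min + (max - min) * midiValue / 127.0;
    int previous;
    if (_last.TryGetValue(cc, out previous) && previous == midiValue)
      return false;  // no change: don't expire the solution
    _last[cc] = midiValue;
    return true;     // changed: caller can set the slider / expire the component
  }
}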
4) Send points, lines, surfaces and meshes back out to VVVV.
5) Render it using VVVV and play with the enormous collection of components in VVVV... it's been around since 1998, for the sake of awesomeness.
This kind of digital operation-to-hardware connection is what's usually done in digital music production. I did look into MIDI controller + Grasshopper work, and I know it's been done, but that has the obvious limitation of not being precise: it only takes 0 to 127. I am thinking that MIDI can still be useful here, because I can program very precise and complex sequences with ease in music production software like Ableton Live.
This is ongoing design research for a performative exhibition due in Bochum, Germany, this January. I will post the definition if I get somewhere. A good place to start for me is the nesting sliders by Monique: http://www.grasshopper3d.com/forum/topics/nesting-sliders
…
ow the steps of the successful run when step 1.2 is bypassed (note that an OpenFOAM session is open in the background while running the Butterfly demo file):
1. create the wind tunnel, and use the different parameters (4,4) for _globalRefLevel_, as suggested by Theodoro in this post
2. run blockMesh:
3. run snappyHexMesh:
4. run checkMesh:
5. connect the case from checkMesh to simpleFOAM and run the simulation:
6. the simulation converged at iteration 1865, but the results visualization part has a problem:
7. so I revised this part according to suggestions from Hagit:
8. and the results can be visualized for P and U values:
The GH file used for the successful run shown above is attached here.
Now, the following is the error I got when the case from the update fvScheme component is used for simpleFOAM simulation:
The warning message on the simpleFOAM component is (note the sigFpe frame, i.e. a floating-point error inside the GAMG/PCG linear solver):

1. Solution exception: --> OpenFOAM command Failed!
#0 Foam::error::printStack(Foam::Ostream&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#2 ? in "/lib64/libc.so.6"
#3 double Foam::sumProd<double>(Foam::UList<double> const&, Foam::UList<double> const&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#4 Foam::PCG::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#5 Foam::GAMGSolver::solveCoarsestLevel(Foam::Field<double>&, Foam::Field<double> const&) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#6 Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#7 Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#8 Foam::fvMatrix<double>::solveSegregated(Foam::dictionary const&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libfiniteVolume.so"
#9 Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
#10 Foam::fvMatrix<double>::solve() in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
#11 ? in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
#12 __libc_start_main in "/lib64/libc.so.6"
#13 ? in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
The error message from the readMe! output node is attached below as a text file.
Hope you can kindly advise on what important steps or parameters I might have missed here. I assume it might be related to OpenFOAM rather than to the Butterfly workflow...
Thank you very much!
- Ji
…
about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly; a sketch of that kind of sweep follows.
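As a hedged sketch of the idea: treat the cluster as a function and sweep a grid of input parameters, flagging the combinations where it fails. SolveCluster is a hypothetical stand-in for however you evaluate the definition (e.g. a headless GH_Document); the parameter names and ranges are made up.

using System;

public static class ClusterSweep
{
  public static void Run(Func<double, double, bool> solveCluster)
  {
    // Walk a coarse grid of the input space and report failures.
    for (double width = 0.5; width <= 10.0; width += 0.5)
    for (double angle = 0.0; angle <= 90.0; angle += 5.0)
    {
      if (!solveCluster(width, angle))
        Console.WriteLine("FAILED at width=" + width + ", angle=" + angle);
    }
  }
}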
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen which compile your project while you type. I know that threading within components and at the RhinoCommon level is a freaking hard problem that has been discussed at length already (although when, 5-10 years from now, it's finished, it will be very cool).
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver were a little more atomic, like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info to do them rigidly defined, right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality; I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file could be exploited: not within components, but rather solving non-interdependent branches of components simultaneously, as in the sketch below. This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
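Conceptually (this is only a sketch, not how the actual GH solver works), the "list of jobs" idea might look like this: components become jobs with upstream dependencies, and every job whose inputs are ready gets solved in parallel with the others in its wave.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class Job
{
  public string Name;
  public List<Job> Upstream = new List<Job>();
  public bool Done;
  public void Solve() { /* compute and cache this component's result */ Done = true; }
}

public static class WaveSolver
{
  // Solve an acyclic job graph in waves; independent branches run simultaneously.
  public static void SolveAll(List<Job> jobs)
  {
    while (jobs.Any(j => !j.Done))
    {
      var wave = jobs.Where(j => !j.Done && j.Upstream.All(u => u.Done)).ToList();
      if (wave.Count == 0) break; // cycle or unresolved dependency: bail out
      Parallel.ForEach(wave, j => j.Solve());
    }
  }
}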
I was imagining a couple of scenarios:
a) Writing a parallel module: the solver starts chewing away, you see it working, you know it's done 1/3 of the work. If you have something to do at that point, you could connect up to some of the already-calculated parameters and write something in parallel to the main trunk, which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze a section of downstream code and make modifications so you can observe the effects more quickly; unfreeze a bit more and repeat, etc., until you're done, and then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled; or could the existing "speed" setting just absorb this need? Since the vast majority of the time that GH is solving is on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if the final run took 15 seconds or 15 hours.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK. From a user point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: a simple percentage version for unpredictable code and, for those of us able to calculate the time our algorithm takes based on the number of input parameters, a countdown in seconds or minutes or whatever.
I think a good place to start with these sorts of problems is to keep on improving clusters, ... etc. etc.
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
using Grasshopper.Kernel.Data;
using Grasshopper.Kernel.Types;

using System;
using System.IO;
using System.Xml;
using System.Xml.Linq;
using System.Linq;
using System.Data;
using System.Drawing;
using System.Reflection;
using System.Collections;
using System.Windows.Forms;
using System.Collections.Generic;
using System.Runtime.InteropServices;

// These usings are part of the standard script-component template and are
// needed for the types used below (RhinoDoc, Point3d, GH_Document, ...).
using Rhino;
using Rhino.Display;
using Rhino.Geometry;
using Grasshopper.Kernel;

/// <summary>
/// This class will be instantiated on demand by the Script component.
/// </summary>
public class Script_Instance : GH_ScriptInstance
{
#region Utility functions
  /// <summary>Print a String to the [Out] Parameter of the Script component.</summary>
  /// <param name="text">String to print.</param>
  private void Print(string text) { /* Implementation hidden. */ }
  /// <summary>Print a formatted String to the [Out] Parameter of the Script component.</summary>
  /// <param name="format">String format.</param>
  /// <param name="args">Formatting parameters.</param>
  private void Print(string format, params object[] args) { /* Implementation hidden. */ }
  /// <summary>Print useful information about an object instance to the [Out] Parameter of the Script component.</summary>
  /// <param name="obj">Object instance to parse.</param>
  private void Reflect(object obj) { /* Implementation hidden. */ }
  /// <summary>Print the signatures of all the overloads of a specific method to the [Out] Parameter of the Script component.</summary>
  /// <param name="obj">Object instance to parse.</param>
  private void Reflect(object obj, string method_name) { /* Implementation hidden. */ }
#endregion

#region Members
  /// <summary>Gets the current Rhino document.</summary>
  private readonly RhinoDoc RhinoDocument;
  /// <summary>Gets the Grasshopper document that owns this script.</summary>
  private readonly GH_Document GrasshopperDocument;
  /// <summary>Gets the Grasshopper script component that owns this script.</summary>
  private readonly IGH_Component Component;
  /// <summary>
  /// Gets the current iteration count. The first call to RunScript() is associated with Iteration==0.
  /// Any subsequent call within the same solution will increment the Iteration count.
  /// </summary>
  private readonly int Iteration;
#endregion

  /// <summary>
  /// This procedure contains the user code. Input parameters are provided as regular arguments,
  /// output parameters as ref arguments. You don't have to assign output parameters;
  /// they will have a default value.
  /// </summary>
  private void RunScript(bool bake, List<GeometryBase> G, Point3d L, Color C)
  {
    COL = C;
    LOCATION = L;
    NAME = ""; // NB: left empty in the original; supply a block name to get distinct definitions
    pnts.Clear();
    crvs.Clear();
    breps.Clear();

    // Sort the incoming geometry into typed lists for the custom preview below.
    foreach(GeometryBase geom in G)
    {
      switch(geom.GetType().Name)
      {
        case "Point":
          pnts.Add(((Rhino.Geometry.Point) geom).Location);
          break;
        case "Curve":
          // create a new geometry list for display
          break;
        case "PolyCurve":
          crvs.Add((PolyCurve) geom);
          break;
        case "Brep":
          breps.Add((Brep) geom);
          break;
        default:
          Print("Add a new case for this type: " + geom.GetType().Name);
          break;
      }
    }

    if(bake)
    {
      // Replace any existing definition of the same name, then re-add it.
      // (The original used "doc" here; the script component exposes the
      // active document as RhinoDocument.)
      Rhino.DocObjects.InstanceDefinition I = RhinoDocument.InstanceDefinitions.Find(NAME, false);
      if(I != null)
        RhinoDocument.InstanceDefinitions.Delete(I.Index, true, true);

      int index = RhinoDocument.InstanceDefinitions.Add(NAME, "description", Point3d.Origin, G);
      // Scale(L, 1) is an identity transform, so the instance lands on the source geometry.
      RhinoDocument.Objects.AddInstanceObject(index, Transform.Scale(L, 1));
    }
  }

  // <Custom additional code>
  // GEOMETRY lists to display
  List<Point3d> pnts = new List<Point3d>();
  List<PolyCurve> crvs = new List<PolyCurve>();
  List<Brep> breps = new List<Brep>();

  string NAME;
  Point3d LOCATION;
  int THICKNESS = 2;
  Color COL;

  // Return a BoundingBox that contains all the geometry you are about to draw.
  // (Returning Empty works, but the preview may get clipped; ideally union the
  // boxes of pnts, crvs and breps here.)
  public override BoundingBox ClippingBox
  {
    get { return BoundingBox.Empty; }
  }

  // Draw all meshes in this method.
  public override void DrawViewportMeshes(IGH_PreviewArgs args)
  {
  }

  // Draw all wires and points in this method.
  public override void DrawViewportWires(IGH_PreviewArgs args)
  {
    foreach(Point3d p in pnts)
      args.Display.DrawPoint(p, Rhino.Display.PointStyle.ControlPoint, THICKNESS, COL);

    foreach(PolyCurve c in crvs)
      args.Display.DrawCurve(c, COL, THICKNESS);

    foreach(Brep b in breps)
      args.Display.DrawBrepShaded(b, new Rhino.Display.DisplayMaterial(COL));

    args.Display.DrawPoint(LOCATION, Rhino.Display.PointStyle.ActivePoint, 3, Color.Black);
    // 3.0 keeps the division from truncating to zero (THICKNESS / 3 == 0 in integer math).
    args.Display.Draw3dText(NAME, Color.Gray, new Plane(LOCATION, Vector3d.ZAxis), THICKNESS / 3.0, "Arial");
  }
  // </Custom additional code>
}…
ails.
Some words about the mesh... (see Image_01)
I took a flat four-point NURBS surface as input (very easy: it defines the total area of my pavilion) and some points (which define the contact with the ground).
Then I extracted a grid of points from the NURBS (Surface_Util_Divide Surface) and compared them with the control points, in order to associate each grid point with its own attractor (Vector_Point_Closest Point).
Then I moved the points down. I used the distance from each point to its attractor (inverted) as the amplitude of the movement vector, so as to say: the nearer you are to the control point, the more intense your movement will be. During this operation I passed the distance list through a Graph Mapper (Params_Special_Graph Mapper), in order to regulate the shaping of my canopy in a very intuitive and interactive way.
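In RhinoCommon terms, a rough sketch of that move (the 1/(1+d) falloff here is just a placeholder for whatever curve you draw in the Graph Mapper):

using System;
using System.Collections.Generic;
using Rhino.Geometry;

public static class AttractorDrop
{
  // Drop each grid point by an amount that grows as it nears its closest attractor.
  public static List<Point3d> Drop(List<Point3d> grid, List<Point3d> attractors, double maxDrop)
  {
    var moved = new List<Point3d>();
    foreach (Point3d p in grid)
    {
      double d = double.MaxValue;
      foreach (Point3d a in attractors)
        d = Math.Min(d, p.DistanceTo(a));
      double t = 1.0 / (1.0 + d); // nearer attractor -> larger move
      moved.Add(p - Vector3d.ZAxis * (maxDrop * t));
    }
    return moved;
  }
}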
At the end of the process I asked GH for a simple Delaunay mesh (Mesh_Triangulation_Delaunay Mesh). It's a very cool command, I believe!!!
OK, now some words about the component, its design and its repetition/adaptation to the mesh...
(see Image_02)
I took the mesh and extracted its components first, and the faces' information second. Then I selected and separated the vertices (1st, 2nd, 3rd) of each triangular face into three well-defined lists.
Then I re-created the triangles' edges. Please pay attention, because it's not the same if you use the output of the Delaunay component directly: here we need a juxtaposition of edges where the triangles touch each other.
After this work I joined the edges and found their centroid. At the same time I found the midpoint of each edge.
Now the component... (see Image_03)
It's a little bit long to describe; I'll try to be brief.
Substantially, it is a loft from a curve to a point, repeated three times for each triangle (Surface_Freeform_Extrude Point). The point is an elevation of the centroid of the triangle (you can choose whether the extrusion has a single height or is related to an attractor; in my case it was fixed). The curve is a combination of things. There's an arc, which starts on the edge (with an offset from the corner) and terminates on the same edge (on the other side, obviously). During its generation the arc passes through a third point, which belongs to another segment; this one connects the midpoint of the original edge (base triangle) with the centroid. The result is a kind of polyline, with two segments and an arc. If you go back to the image of the component that I posted, you'll probably understand what I'm saying better than from the definition.
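A loose RhinoCommon sketch of that profile curve, per edge (the offset and bulge proportions are assumptions for illustration, not the values from the actual definition):

using Rhino.Geometry;

public static class EdgeProfile
{
  // Two straight end bits along the edge plus an arc bulging toward the centroid.
  public static PolyCurve Build(Point3d corner1, Point3d corner2, Point3d centroid,
                                double offset, double bulge)
  {
    Vector3d along = corner2 - corner1;
    along.Unitize();
    Point3d a = corner1 + along * offset;  // arc start, offset from one corner
    Point3d b = corner2 - along * offset;  // arc end, offset from the other corner
    Point3d mid = (corner1 + corner2) * 0.5;
    // The arc's interior point sits on the segment from edge midpoint to centroid.
    Point3d thru = mid + (centroid - mid) * bulge;

    var profile = new PolyCurve();
    profile.Append(new Line(corner1, a).ToNurbsCurve());
    profile.Append(new Arc(a, thru, b).ToNurbsCurve());
    profile.Append(new Line(b, corner2).ToNurbsCurve());
    return profile;
  }
}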
The posit…
o it would cause trouble with unfolding and fabricating... that's why I used the Extrude Point component: it will give you a similar result, but all the surfaces are planar.. you can control the extrusion direction with a tip point in Rhino...
2) I changed the tagging so every tube has 8 points from list A and 8 points from list B... the first number of the tag is the number of the point within one tube... the last number of the tag is the order of the tubes (I drew a little picture in GH, hope you'll understand it)... I think the original way of tagging wasn't really useful.. but you can change the tagging yourself...
3) The definition is really messy, sorry about that, but it's just quite a complicated task...
4) If you find some incorrect order of tagging, use the slider that controls the Shift List component... it will shift the tagging..
5) If you won't be using this definition, or you find some better way, pleeeease don't tell me - I'll jump out the window :D ... it took me a whole day to make it work :D
6) I can't guarantee you anything - I hope it works, but if not, at least I tried... so check everything (especially the order of tags and points) twice before you fabricate it.. or print a few tubes and make them in paper first..
7) There is a part of the original definition that is not useful anymore. I left it there, but you can delete it (I called it "UNUSED PARTS OF ORIGINAL FILE").
..good luck
Dimitri…
ou mean by 'Activate Direct Rhino Modifying'. Perhaps you could expand?
I like the idea of mixing and matching script and 'direct' modeling. There seem to be a lot of potential platforms for this:
1. Implicit History: Is there a way for GH to read the direct modifications (with History activated) and translate them into a component (or cluster of components)? IH seems to record the UI events and the associated elements. GH would need to write as well as read the IH info, in order to preserve as much flexibility downstream as possible. You mentioned Houdini; H seems to record all 'implicit' or direct mods, done via the mouse-based CAD UI, in its network graph. Maybe this should be captured in the IH cluster/component mentioned above.
2. RhinoParametrics: RP has done a lot of work to intercept and translate Rhino commands into its version of Implicit History. It seems to be centred on points, which makes sense, as so much of the traditional 'dumb' way of inputting CAD info is based on mouse clicks on screen (points), predicated by commands, active locks, workplanes, etc.
3. Gumball: Rubberduck's use of the new Gumball tool to capture 'direct' modeling inputs through the Gumball points to a good source for capturing this kind of input; it is related to the 'macro recorder' approach taken by RP and IH.
4. The new Geom Cache component seems to be able to preserve a lot of info about the baked object. There may even be a way to read tagged info generated both by GH (baked with the "reference" object) and external to GH (by IH, the Gumball, or even third-party apps like RP).
Would be interesting to know what kind of info is 'preserved'. Houdini seems to have a pretty consistent approach to geometric data that allows parallel NURBS/SubD/mesh versions of the geometry. It also seems to have a coherent hierarchical approach to vertices/edges/loops/faces, etc., that allows the subelements to be arbitrarily grouped for 'direct' modeling and still be part of a procedural script.
I guess the polygon/mesh approach to geometry lends itself to this. If all the procedural commands/components understand mesh geometry in vertex, edge, or face format, then combining direct and script modeling is doable in a transparent way?
In your example above, the Geom Cache node 'flattens' the object to dumb geometry, which is manipulated using Rhino and then used as a reference object in the next section of the graph. I guess there is nothing to stop the follow-on components from reading the preceding graph for parameters, for additional intelligence?
Does GH 'get' or 'put' parameter data?
…
hopper and the GH file.
2. There is a drop-down menu at the top of Pure Data that reads "Media". Click on "MIDI". If your device connection is working, you should see it show up as an option. Set the device to MIDI in. You don't really need to set a MIDI out unless you are planning to send messages back to the device (not sure why you would want to).
3. The boxes labeled "ctlin" with a number are the Control Change ins. In Pure Data go to the "Edit" menu and click on "Edit Mode". Click on one of the "ctlin #" boxes and change the number to match the Control Change number of your physical controller. Mine starts with 5 in the upper right and goes to 65. Each control change number shows up on the display window of my device when I use it, which made it easy.
4. Continue this process for all your controls. Delete the unnecessary "ctlin #" boxes by selecting them with a fence and clicking "delete". When you hover over one of the wires you should see an "x"; press the "backspace" key to delete it.
5. Now go down to the "pack f f f ..." box. There should be as many "f"s (floats) in that box as you have controllers. Delete any remaining "f"s.
6. Next look at the box below that reads "send /0...". Make sure to keep the "/0"; if you delete the "/" it will crash Grasshopper. Change the number "5" to match your first control change number. Leave the $numbers alone; you'll want to keep them sequential. Continue changing the control change numbers to match all of yours. The $numbers should match the order in which you wired each controller to the "pack f f f..." box.
7. For testing purposes, hover over the input at the upper left of the "print" box and connect it to the out of the "send" box. If everything is mapped correctly and working properly, when you go back to the "main" PD window you should see a list of all controllers with a value (0 to 127) next to each. As you turn a knob, the value next to the control change number will increase from 0 to 127. This will give you a good indication of whether or not everything is working and whether you mapped it correctly.
8. Click on the "connect OSC" box. You might need to exit out of "edit mode" and back to "performance" mode in the PD canvas.
9. Go to Grasshopper. If everything is working you should see the Panel read "new message" when you turn a knob. At this point it should be pretty obvious how to modify the Grasshopper components; I've tried to keep everything as consistent as possible. Since I filtered out the "/0" and the "explode data tree" component starts at 0, the numbers are shifted down by 1.
I just left the IP address, etc., alone on the gHowl UDP component. Just make sure the "port number" matches the OSC port number on the send in Pure Data. If it crashes, you may need to choose a new number.
Hope that helps. Let me know if you have any questions. If your computer is not recognizing your MIDI controller, you may need to install "Midiyoke". I did at first, but it turns out I didn't need it after all.
Best of luck.
…