ur setup. Can you say what sensor you are using? Are you using an Arduino to write this ascii information to the serial port? If so, there may be some formatting code for the string that you'll need to do to get the Read component to function properly. I see that you were able to open the port and Start reading... so my first thought is that the data is formatted correctly....
All of the Read components look for a specific character (in this case two characters) to indicate when they have reached the end of the line being read and should spit out the data. In this case, Firefly uses the Carriage Return (\r) and Line Feed (\n) to know when it has reached the end of the line. In Arduino, these are automatically added to any line if you use the Serial.println("blah, blah, blah"); command. Notice, this is different from the Serial.print("nothing to see here"); command. This doesn't mean that you can't still use the regular print command... it's just that you need to use the println command to indicate when you've reached the end of the line. Let's take a look at a simple example.
void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(A0);
  Serial.print("The value of the sensor is: ");
  Serial.println(sensorValue);
  delay(20); // important to wait some small time so you aren't sending just a ton of info over to GH, which will cause it to crash :(
}
The first print statement prints a string to the serial port... and the next one adds the current sensor value... and THEN adds the carriage return and line feed to start a new line. The nice thing about using these together is that you can concatenate any type of data you want. If you were to upload this sketch, you should see a sentence being printed to the serial port that says "The value of the sensor is: 512". I made up the number, but you get the idea. Notice, I also had to include a delay function. You don't always need this (there are other ways to go about it), but the important thing to note is that the loop cycle on the Arduino can run really fast. I mean... really fast. So, you won't want to send that much data over to GH, because this could flood the string buffer in the Read component and cause it to crash (eventually). It's a good idea to add some small time interval just to slow it down a bit. I should say that I've optimized the refresh rate in the next release so it's significantly faster... so hopefully this won't be as big of a problem... but hopefully that helps some.
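On the receiving side, whatever reads the port just needs to buffer bytes until that CR+LF pair shows up. As a rough sketch of the idea (this is NOT Firefly's actual code; `feed` is a placeholder for whatever hands you raw serial bytes):

```python
class LineBuffer:
    """Accumulate raw serial bytes and emit complete lines, using
    CR+LF (b"\\r\\n") as the end-of-line marker, i.e. the same
    terminator Serial.println() appends on the Arduino side."""
    def __init__(self):
        self._buf = b""

    def feed(self, chunk):
        """Add newly received bytes; return any complete lines found."""
        self._buf += chunk
        lines = []
        while b"\r\n" in self._buf:
            line, self._buf = self._buf.split(b"\r\n", 1)
            lines.append(line.decode("ascii"))
        return lines

buf = LineBuffer()
print(buf.feed(b"The value of the sensor is: 5"))  # → []
print(buf.feed(b"12\r\n"))  # → ['The value of the sensor is: 512']
```

Note the first call returns nothing because no terminator has arrived yet; that's exactly why Serial.print alone never triggers the Read component.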
Now... why are you writing data to a sensor? Sensors by default are considered inputs... so I'm quite confused as to why you would want to send data back. (If you are, then you need some way to handle the string data being sent from GH... this is the whole reason we built the Firefly firmata; it sets up the two-way protocol so you don't have to deal with all of that mess. If you're going to read and write, you're better off just uploading the firmata and using the Uno Read and Write components.) Also, I'm not very familiar with HyperTerm or Advanced Serial Port Terminal... but I will say that you could get COM port conflicts if you're trying to open the port with different tools. Anyway, I hope some of this helps you get up and running.
Cheers,
Andy
…
ne. Though I suppose providing a help file which lists some useful tricks for some operations would be a good place to start.
It would be possible to add persistent undo to Clusters, and it wouldn't even be that difficult. Adding undo data into the GH file is something I've been meaning to add since the first day of undo/redo, and the plumbing is in fact there, but it was never fully hooked up. I will definitely try this for GH2. And I'll also have a think about how to implement version history for clusters.
Phew, my brain hurts even just to think about this. I suppose step one would be to write a clever merge algorithm for two files that have some things in common and some not. But even that will be tricky as heck.
This is a major problem. First of all, running the solver in a thread and keeping the UI alive will only slow things down even more. On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
Multi-threading is something I'm going to try and implement in GH2, but there's only so much I can do. If you run a solid boolean operation on a boatload of shapes, it's a single operation that is performed inside Rhino and there's nothing I can do to make it run on multiple threads. This is a general issue: sometimes things take a long time because there are many operations to perform, like offsetting 2500 curves. I can probably multi-thread that, provided the Rhino curve offsetter is thread-safe. However, stuff may also take a long time because of a single operation (like the aforementioned huge solid boolean).
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
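The "many small operations" case above is the one that parallelizes naturally: one task per item. A minimal sketch of that shape (the `offset` function here is a toy stand-in for a real curve offsetter, and the whole approach only works if the underlying call is actually thread-safe, as noted above):

```python
from concurrent.futures import ThreadPoolExecutor

def offset(curve):
    # Toy stand-in for a thread-safe curve-offset call; a real
    # implementation would invoke the geometry kernel here.
    return curve + 0.5

curves = list(range(2500))            # 2500 independent inputs
with ThreadPoolExecutor() as pool:    # one task per curve
    offsets = list(pool.map(offset, curves))
print(offsets[:3])  # → [0.5, 1.5, 2.5]
```

The single-huge-operation case (one giant solid boolean) gets no benefit from this pattern, which is exactly the distinction drawn in the post.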
What would you do with a solver which runs in the background? How does it differ from only running solutions when you want to? Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data. Wouldn't it be just as effective to disable the solver, make all the changes you want to make, then press F5?
Just because something runs in a thread doesn't mean you can shoot it in the head any time you want without consequences. Aborting threads typically means setting a boolean somewhere and then letting the thread commit suicide, while performing all the necessary cleanup. If you just destroy a thread there's no saying in what state you leave the memory.
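The "set a boolean somewhere and let the thread commit suicide" pattern looks roughly like this sketch (Python, with `time.sleep` standing in for one unit of solver work):

```python
import threading
import time

stop = threading.Event()  # the "boolean somewhere"

def solver():
    """Long-running solve that checks the flag between steps and
    exits cleanly, instead of being destroyed mid-operation."""
    for step in range(10_000):
        if stop.is_set():
            break              # cooperative shutdown
        time.sleep(0.001)      # stand-in for one unit of real work
    # cleanup runs here, so memory is left in a known state

t = threading.Thread(target=solver)
t.start()
stop.set()   # user edited the file: request termination
t.join()     # wait for the thread to finish its cleanup
print(t.is_alive())  # → False
```

The key point is that the thread is never killed from outside; it notices the flag at a safe checkpoint and tears itself down.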
I think a good place to start with these sort of problems is to keep on improving clusters, add more flexible structuring UI such as Layers or Filters or Pages or whatever to the canvas, add ways to share data between remote parts of a file without suffocating the display with wires, and to provide easy ways to temporarily disable parts of a file (think of it as Clipping planes for GH). That way you can make local changes and see local effects before solving the entire file again.
I'm certainly impressed by the sheer extent of the file you people made, it will be a lovely test case for UI improvements.
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 3:34am on September 4, 2013
er" logic, but it fails when it comes to the copy or offset.
Here is my logic:
Take a rectangle of 25 m x 12 m; make it a surface.
I divide it into "blades" of 20 cm.
I take the edges of the "blades",
I divide these edges into 40 points (or equivalent) (A)
I identify my curves which are on the floors; these are curves (B)
First I do this "test":
for each intersection between A and B, I make a circle of X cm (slider) in diameter, and the rule is the following:
* Inside this circle, the future movement of my A curve must stay at Z = 0
Second step:
for each next point, I have to leave a copy at Z = 0 and raise the second one to a height of Y cm (slider) from the ground.
The next point (W = a slider to choose after how many points I make the following step), which is a little bit farther from the previous point, must keep the same height Y, and also be copied at Y + Y cm.
There is a Z value (slider) which is the max height possible for these points, which means that the next point must stay at this very same level, except in the third-step scenario.
The purpose is to be able to have flat areas, like the steps of a stairway.
Third step :
Grasshopper must test whether the A points are between two or more "areas at Z = 0". Why?
The goal is to obtain something like the "side view" screenshot if there are two starting points at Z = 0.
This also means that if there is an odd number of points, the remaining points must be at the top of the "stairs".
At this point of the Grasshopper definition, we should be able to obtain the "staircase form" thanks to the sliders, regarding:
- The size of the test circle between the A and B curves
- The "footstep" (height) of each point
- The number of points before a "copy of the point + the next footstep rise"
- The max height possible for all the points of the B curves
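If I've read the stepping rules correctly, the height assignment along one curve could be sketched like this (this is a guess at the intent: W points per flat tread, a rise of Y per tread, capped at a maximum Z; all names are placeholders for the sliders described above):

```python
def tread_heights(n_points, W, Y, Z_max):
    """Assign a height to each of n_points along a curve:
    every W points the level rises by Y, clamped at Z_max,
    producing the flat 'stairway' treads described above."""
    heights = []
    for i in range(n_points):
        h = (i // W) * Y          # which tread this point sits on
        heights.append(min(h, Z_max))
    return heights

print(tread_heights(10, W=3, Y=0.2, Z_max=0.5))
# → [0.0, 0.0, 0.0, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.5]
```

The clamping at Z_max is what keeps the last points level once the maximum height is reached, as in the second/third step description.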
And at this moment I have a new problem in my logic. You will get my idea, but it might be wrong as well...
After that, we should be able to link every point by a straight line.
To fillet each line with the following one, with an angle P
To join all the lines of a same B curve
To cut it at the center of each circle at Z = 0 (the intersection of A and B)
To offset it by a distance Q
To raise a line from the center of each circle at Z = 0
To cut the extra part of each offset curve to get an offset curve "aligned in Z" with the original one
To loft the original and the offset one
To extrude the surface to a distance of R
And the Grasshopper definition "should be done", because I will duplicate it for the ceiling, reverse the form with a -Z vector to the Y value, and modify my Z into Z' to modify my max height.
Could you help me ?
…
ly one (the cost of the structural material in my case) and penalize the individuals that do not satisfy the structural verification by multiplying the cost for that iteration by a factor of 10. This seems to work really well; in fact I obtained a convergence of the results in a specific area and number of beams.
Now I have to modify something, because the thickness of the insole tends to the minimum of the range (only because it's the most expensive material in my case), despite the structural verification being satisfied with the maximum height of the beams.
I'm expecting an insole thickness of about 20-30 cm and a beam height less than the maximum. I increased the minimum of the insole thickness range to 20 cm, but I hope the solution tends to a larger value.
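The penalty scheme just described might be sketched as follows (names are placeholders for illustration, not actual code from the definition):

```python
def penalized_cost(material_cost, passes_verification, factor=10.0):
    """Fitness used by the optimizer: raw structural-material cost,
    multiplied by a large factor when the individual fails the
    structural verification, so infeasible designs are driven out."""
    if passes_verification:
        return material_cost
    return material_cost * factor

print(penalized_cost(120.0, True))   # → 120.0
print(penalized_cost(120.0, False))  # → 1200.0
```

A multiplicative penalty like this keeps infeasible individuals in the population but makes them uncompetitive, which matches the observed convergence.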
Do you have any suggestions in this case?
Your post was really helpful, thank you so much again for the perfect explanation!
Leonardo…
aph relaxation in 3D and more). There is much more already in our GitHub repos and more to be added. For getting an idea of our future direction check this lecture out. For getting a better understanding of graphs and graph theory watch this lecture and this lecture on a gamified spatial configuration process. Stay tuned for more and do not hesitate to post Python questions in the meantime.
ps. If you are having installation problems, please check the remedy suggested below:
Comment by Iman Sheikhansari on August 26, 2019 at 8:33am
Hi. If you are encountering a problem with Rhino 6 versions, don't worry. Follow these steps:
1. Download SYNTACTIC from https://sites.google.com/site/pirouznourian/syntactic-design
2. Install it, go to the installation folder, and drag & drop SYNTACTIC (the green one) onto your Grasshopper canvas.
3. Close your Rhino and reopen it.
4. Type GrasshopperDeveloperSettings.
5. Tick the "Memory load *.GHA assemblies using COFF byte arrays" option.
6. Run Grasshopper and enjoy the plugin.
…
gap as for a 20 meter gap, it's not a good argument.
I fully concede that not every single thing may be backed up by logic. There are simply too many design decisions to make and not enough time to make them rigorously. And I do believe there is a place for human intuition and art in architecture, but I also think that artistic (or intuitive, or emotional) considerations should clearly be labelled as such.
When Le Corbusier designed the urban layout of the city of Chandigarh he used his intuition to distribute the buildings and clusters. His intuition however was grounded in European climes and it failed him in India. On hot days it becomes almost impossible to walk the distance between them. Would Chandigarh have been a better place if the maximum distance was defined by the largest walkable distance on the hottest day of the year instead of the unjustifiable intuition of the designer? I suspect it would.
Furthermore, I believe that architects - student and professionals alike - regularly make formal decisions according to their aesthetic judgement. To suggest that students aren't qualified to make a design decision during their studies because they think it's formally successful seems exceedingly stingy;
There are plenty of rational decisions which are made by tacit processes. People can become very good at mimicking rational behaviour using intuition. And, as I said, if you are an architect with a distinguished career, if you've already proven yourself to be capable of good design, then there comes a point where your intuitions can be trusted (to an extent).
But students whose every design has always been virtual, who have not been able to evaluate their decisions by a follow-up study, I don't see how anybody can trust their instincts. Instincts aren't just sitting in someone's brain, they are cultivated by relentless exercise and trial-and-error. Until you actually build something there is no error, only trial, and virtual trial at that.
I find architects' attempts to justify what are obviously decisions based on formal taste using other means often taking the same form of obfuscation that makes architects appear to be intellectual charlatans to specialists in other fields.
I fully agree here. If there are non-communicable aspects to a design, just say that. There's no shame in it as long as you're honest about it and have considered -however briefly- the consequences in case you're wrong.
I'm by no means advocating that all architects must master every detail in their work. Rather, that architects have at least a generalist's working knowledge of materials and construction systems. Floors don't levitate, and windows require depth; rules of thumb count as vital knowledge.
I think we're on the same page here. If you want to make a physical building, then there's more to it than pure design. Engineering comes into play. I don't mean to imply that engineering doesn't require creativity or even artistic intellect, but it is a different kind of problem-solving.
I fully agree with your fourth point. I just wasn't sure what performance-driven meant.
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 4:19pm on August 14, 2013
…ucceeded in sorting the points with Grasshopper and the Points in Brep tool, as I thought. I went from around 400,000 points to only 20,000 points around my 3 rails. It's very efficient (though a bit risky with all those points).
I forbade the CircleFit component from making a circle if there are not at least 5 points on the section, because when there are only 3 or 4 points, a single bad one is enough to make the circle wrong, whereas beyond that the circle has a better chance of being "good".
I also created "Pipes" (built from portions of the axis) instead of the selection "Boxes" to avoid selecting too many points that would not be rail points.
I then created "panels" for the average of the distances in X and Y, and the average of the center-to-center distances.
All of this works well with one axis and one pipe. But now I'm trying to apply it to several rails at the same time. I believe I understood that I need to create "paths" in the input manager and match the path of the axis with that of the pipe.
In my example I set up 3 curves and 21 sections. When I use the boxes to create the axis portions, it creates 63 "sub-paths" of 1 curve each, whereas it should create 3 paths of 21 curves, if I understood correctly.
Because once it has created the points inside the "Pipes", it must project them onto the corresponding planes. And that's where the problem shows up: it does not match the points to be projected with the planes.
I'm sending you the single-curve, single-pipe version (it's v5, with a Rhino file where the axis curve is baked so you can zoom in on the area more quickly), and I'm also sending you the one with 3 curves and 3 pipes. Note that one of the pipes will need one radius and the other two a different radius.
All this is rather complicated; I hope I'm not bothering you too much.
Thanks in advance.…
onsider:
Identify the aspect of the calculations that consumes the most time and resources: Based on what I have understood so far about the parametric workflow within the Grasshopper environment, I don't think it is Rhino/Grasshopper that consumes the most time/resources (unless you are handling complex geometry and using native rendering). So, if you could identify the part of your iterations that consumes the most resources, we can look into parallelizing/optimizing that. It could be something like (Rhino modelling - 15%, E+ - 40%, Radiance - 45%)... If there is no way to keep track of that right now in Grasshopper, let me know; I might be able to write a custom script that records the timestamp for each part of the calculation.
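A minimal way to record per-stage timings, assuming the stages can be wrapped from Python (the stage names below are hypothetical stand-ins for the real modelling/simulation calls):

```python
import time
from contextlib import contextmanager

timings = {}  # stage name -> seconds spent

@contextmanager
def stage(name):
    """Record how long one part of the workflow takes."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - t0

# Hypothetical stages; time.sleep stands in for the real work.
with stage("RhinoModelling"):
    time.sleep(0.01)
with stage("EnergyPlus"):
    time.sleep(0.02)

total = sum(timings.values())
shares = {k: round(100 * v / total) for k, v in timings.items()}
print(shares)  # e.g. {'RhinoModelling': 33, 'EnergyPlus': 67}
```

The percentage breakdown is exactly the kind of (X%, Y%, Z%) profile mentioned above, and tells you which stage is worth parallelizing first.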
Parallelizing Grasshopper: I have no idea how to do this, so I think the best resource/forum would be the Grasshopper/Honeybee discussion board. I think at the very least, to make Grasshopper run on remote computers, you'd have to install Rhino/Grasshopper on those computers as well.
Parallelizing EnergyPlus/Radiance: Based on what I understand from reading Mostapha's source code and also talking to him about this issue, Honeybee typically creates batch files, i.e. Radiance or E+ instructions, which are then used to run EnergyPlus and Radiance. Radiance runs can be parallelized to a great extent; however, owing to the modular nature of how calculations are set up for grid-point calculations, image rendering and some of the new matrix-based calculations, there is no single answer to parallelizing Radiance calculations. One can look into optimizing a certain type of calculation and then code instructions for implementing it. E+, which I have only been using for the past month or so, doesn't seem to have a native way of setting up parallel runs. One can, however, set up multiple separate runs of E+ and direct them to separate processors. I think there was some discussion about E+ in the Honeybee forum, so you might get a better answer there on this issue.
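Launching multiple separate runs and spreading them over processors could be sketched like this (the batch-file paths in the comment are hypothetical; trivial commands are used as stand-ins so the sketch is runnable):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_batch(cmd):
    """Run one simulation batch command and return its exit code."""
    return subprocess.run(cmd, capture_output=True).returncode

def run_all(commands, workers=4):
    """Launch the batch runs in parallel, a few at a time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_batch, commands))

# Hypothetical usage: each entry would be a batch file written by
# Honeybee, e.g. ["C:/sims/run_case01.bat"]. Here we use trivial
# commands as stand-ins.
cmds = [[sys.executable, "-c", f"print('case {i} done')"] for i in range(3)]
codes = run_all(cmds)
print(codes)  # → [0, 0, 0]
```

Threads suffice here because the heavy lifting happens in the child processes; the `workers` count caps how many simulations run at once.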
Clustering computers and GPU-based calculations: One way of implementing the kind of parallelizing that you are referring to, i.e. utilizing unused desktops, is to cluster computers. Penn State has a dedicated, text-only, Linux-based cluster system which I have been tinkering with for the past year or so. A single node of this cluster has 60 parallel cores and close to 300 GB of RAM. Each node, in turn, was created by linking a bunch of computers together. Implementing such a cluster would require active participation from the IT systems admins in your firm. Another option is to use Accelerad, which parallelizes Radiance on the GPU. Radiance doesn't have a limitation on the number of cores you can use; I think the 8 processors you mentioned are more a function of currently available desktop computer configurations than of Radiance's ability to handle more processors (an i7, for example, has 8 logical processors). In the past, I have run parallel renderings with up to 20 processors. Radiance code is optimized to run on Linux systems, so performance on Windows systems is likely to be somewhat slower.
Finally, unless there is a pre-existing platform to handle such parallel processing, some scripting effort would be required to direct calculation files outwards into different systems/processors and then fetch and consolidate results from those calculations into a single location and then visualize those results on an interface like Mostapha’s Design Explorer.
Sarith…
ing ways to leverage simulation results from Ladybug and inform the design of building envelopes with benefits that can be modeled. Given that 20 percent of the cost of a project typically goes to the facades, and maybe half of that goes to the openings, there is good enough reason to question how to materialize that 10 percent, which can result in a 10-30 percent difference in total energy consumption.
I think ideally radiation analysis, natural ventilation and daylight analysis on floors should all inform opening sizes and placements, as well as the building sections at large. However, natural ventilation seems to be the most complicated one because it couples airflow and thermodynamics. I have a definition set up so that I can batch simulations for radiation analysis and daylight analysis, but natural ventilation is the missing link. So for what I am doing now, I will select a handful of designs that seem to work best based on the two available analyses and convert all the geometry into CAD files so that I can run them in an evaluation copy of Autodesk Simulation CFD. So for now I can do this in 2 stages.
But for the future, given the possibility of actually having that as a Grasshopper feature, which would be lovely, I want to understand the science behind it and share some links.
(http://www.wbdg.org/resources/naturalventilation.php) In this link the author outlines quite a few general principles and variables to consider for natural ventilated buildings.
For example, how stack effect works.
Qstack = Cd * A * [2 g h (Ti - To) / Ti]^(1/2), where

Qstack = volumetric ventilation rate (m³/s)
Cd = 0.65, a discharge coefficient
A = free area of the inlet opening (m²), which equals the area of the outlet opening
g = 9.8 m/s², the acceleration due to gravity
h = vertical distance between inlet and outlet midpoints (m)
Ti = average temperature of indoor air (K); note that 27 °C = 300 K
To = average temperature of outdoor air (K)
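Plugging numbers straight into that formula, as a quick worked example (the opening area and stack height are assumed values for illustration only):

```python
import math

def stack_flow(A, h, Ti, To, Cd=0.65, g=9.8):
    """Volumetric flow rate (m^3/s) driven by the stack effect,
    per the formula quoted above: Q = Cd*A*sqrt(2*g*h*(Ti-To)/Ti)."""
    return Cd * A * math.sqrt(2.0 * g * h * (Ti - To) / Ti)

# Assumed example: 1 m^2 opening, 3 m between inlet and outlet
# midpoints, indoor air at 300 K (27 C), outdoor air at 290 K.
q = stack_flow(A=1.0, h=3.0, Ti=300.0, To=290.0)
print(round(q, 2))  # → 0.91
```

Note how the flow scales with the square root of both the height difference and the indoor/outdoor temperature difference, which is why inlet/outlet vertical placement matters so much in the discussion below.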
The thing about natural ventilation is that not only do the sizes and positioning of the openings on the facade facing the predominant wind matter, but the openings on the other side matter as well. The vertical distance between the inlets and outlets also needs to be taken into account. The author suggests that naturally ventilated buildings should be no wider than 45 feet.
This PDF presentation discusses CFD for natural ventilation and illustrates why it is not easy:
http://isites.harvard.edu/fs/docs/icb.topic882838.files/L17.6205Airflow-Modeling_Ibarra.pdf
And this PDF briefly outlines the approach taken by DesignBuilder:
http://isites.harvard.edu/fs/docs/icb.topic472869.files/DesignBuilder%20Simulation%20Training_HSD.pdf
Lastly, a wide spectrum of environmental analysis work by e3lab:
http://www.e3lab.org/research
http://www.e3lab.org/green-buildings
If I make progress on a way to tie the three analyses together (radiation, daylight and natural ventilation), I won't forget to post it on this thread.
Thanks.…