user to understand. RhinoScript is generally more straightforward and easier to use. You can think of it as a translation of RhinoCommon so that you don't have to write all the technical stuff.
In your first line you've said "import rhinoscriptsyntax as rs". To see the methods you can call from this library, go to the Help menu and choose 'Help for Rhinoscript'. It will show you a searchable window of all the options you have. This is much easier for new users to learn from than looking at the RhinoCommon SDK.
If you search the help file for 'BoundingBox' you'll get the screen capture below:
At the bottom you can see an example of how to use it. In your case you would replace the following lines:
2/ boxA=brepA.GetBoundingBox((0,0,0,)) --> boxA = rs.BoundingBox(brepA)
3/ boxB=brepB.GetBoundingBox((0,0,0,)) --> boxB = rs.BoundingBox(brepB)
The script you have written uses elements of both RhinoScript and the RhinoCommon SDK. I would suggest you start by using just RhinoScript. Below, I have rewritten the first 8 lines of your script using only RhinoScript:
import rhinoscriptsyntax as rs
#Get BoundingBox from breps.
BoundingBoxA = rs.BoundingBox(brepA) #Returns list of eight corner points
BoundingBoxB = rs.BoundingBox(brepB)

#Get centre point of RhinoScript BoundingBox (which is a list of eight points).
boxA = rs.AddBox(BoundingBoxA) #Generate box from corner points
ptA = rs.SurfaceVolumeCentroid(boxA) #Get volumetric centroid of box
boxB = rs.AddBox(BoundingBoxB)
ptB = rs.SurfaceVolumeCentroid(boxB)
For reference, the following will achieve the same thing using RhinoCommon, in fewer lines, but it is more technical. There are a few other quirks as well; for example, you have to explicitly tell the Python component what kind of object 'brepA' is. See below for an example of the same script in RhinoCommon:
import Rhino as rh
centerPtA = brepA.GetBoundingBox(rh.Geometry.Plane.WorldXY).Center
centerPtB = brepB.GetBoundingBox(rh.Geometry.Plane.WorldXY).Center
I'm not sure what you are trying to achieve overall, and your loop doesn't make a lot of sense to me, but I hope that clarifies some of the differences between the two libraries you can use.
Regards,
M…
out of the practice walls.
Anyway, here's a hint: hard top-down clustering is used (the process is STOPPED at the first level of clustering for clarity, meaning flat clustering):
Cubes are abstract centroids of spaces (say: "rooms"):
Then clusters are made; Prox K-Means is used here (other methods are also available). Cyans are the abstract representation of the flat clustering (for clarity). The decision on the number of clusters is quite complex and is based on criteria that are not used here (adjacency matrices et al.):
Then this:
And finally that:
Obviously the real thing works recursively (kinda like a fractal algo) on the clusters and stops when the predefined number of nodes is reached (say 2 or 3). Then the "flat" red connector shown actually connects (bottom to top) child-to-child AND child-to-parent clusters, etc.
BTW: GH has a component (called QuadTree) that - I guess - works "kinda" like a K-Means clustering algo, but the fact that GH is acyclic by nature means that you should use Anemone if the above is attempted via the component way (not my way anyway).
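For anyone curious what the flat-clustering step looks like in code, here is a minimal sketch of plain K-Means on 3D room centroids in stdlib Python. This is a generic illustration only, not the Prox K-Means variant mentioned above; the point data and cluster count are made up:

```python
import random

def k_means(points, k, iters=20, seed=1):
    """Cluster 3D points (e.g. room centroids) into k groups."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)  # initial centers: k random points
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        groups = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        # Move each center to the mean of its group (keep old center if empty).
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(v) / len(g) for v in zip(*g))
    return centers, groups

# Two obvious blobs of "room" centroids:
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (10, 10, 0), (11, 10, 0), (10, 11, 0)]
centers, groups = k_means(pts, 2)
```

The recursive ("fractal") version described above would simply call `k_means` again on each resulting group until the node-count threshold is reached.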
more soon
…
by anything else (like walls, ceiling etc.). So, specifying ambient parameters in the simulation does not serve any purpose.
2. Get rid of source subdivision by setting -ds to 0. This will make Radiance assume that your source is a point source (which seems to be a valid assumption in your case).
3. Finally, you can cut down the time even more by magnifying the candela value of a single light fixture instead of using the 9 fixtures that you have right now. This is a hack, but will work because that way, even the deterministic part of the calculation will feature only 1 variable instead of 9. You can make this work by increasing the candelaMultiplier of that one source by 9.
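The arithmetic behind point 3 is simple: nine identical point sources in the same spot produce exactly the same direct illuminance as one source with nine times the candela value. A toy inverse-square check in plain Python (not a Radiance run; the positions and intensities are made up, and since the real nine fixtures sit at different locations this equivalence only holds approximately, which is why the result below differs slightly from the base case):

```python
def illuminance(intensity, source, point):
    """Direct illuminance from a point source: E = I / d^2 (inverse-square law,
    ignoring the incidence-angle cosine term for simplicity)."""
    d2 = sum((a - b) ** 2 for a, b in zip(source, point))
    return intensity / d2

src = (0.0, 0.0, 3.0)   # fixture location (hypothetical)
p = (2.0, 1.0, 0.0)     # measurement point on the floor (hypothetical)

nine_sources = 9 * illuminance(100.0, src, p)  # nine 100 cd sources, co-located
one_boosted = illuminance(9 * 100.0, src, p)   # one source with candela x 9
# The two values are identical for co-located sources.
```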
I have attached a gh file (which shows the parameter options) as an example. The points I have made above will mostly work for electric lighting only. Daylighting needs a different kind of optimization because we are dealing with parallel rays from the sun and not a source with a directional distribution like electric lights.
The image below is the base case. It takes the maximum time, using default settings.
This one is with direct-only calculations.
This is the case where a single fixture was used and the candela multiplier was used to magnify its brightness 9 times. It took the least amount of time... but as you can see, there is a perceptible difference between this one and the base case.
…
he process. The last one is there because fixing it would cause another problem, which we feel is more serious. Solutions may well be forthcoming in the future though.
1. Grasshopper curves and points are drawn more towards the camera than they really are. This is a conscious decision. Often Rhino geometry and Grasshopper geometry exist in the same place. If we were to draw the Grasshopper preview in place, there would be no telling whether you'd see the Rhino curve or the Grasshopper curve. We feel it's important that you always see the Grasshopper curve on top. This is why we draw all curves and points slightly towards the camera. However, we don't do this for meshes. This results in something akin to the image below. The eye represents the location of the viewport camera, the shaded box represents the actual location of the geometry, and the thick black lines represent the edges of the geometry moved towards the camera. As you can see, the red lines will be visible, even though they should be behind the shaded box. This effect can get very strong when the camera is close to some geometry relative to the size of the bounding box of all geometry.
2. Wires behind the camera are sometimes visible. This is a bug I don't know how to solve. We'll get around to it eventually. When an object is behind the camera the display transform sometimes makes it visible in front of the camera in some weird inverted perspective mode.
3. Meshes are not z-sorted prior to display. This means that the order in which they are drawn is not back-to-front, but fairly arbitrary. This means that a transparent mesh may appear to punch a hole in the mesh behind it. If this is annoying you to no end, you can use Ctrl+F on the Grasshopper components that contain the meshes that are punching holes and then press F5 to recompute. The draw order should now be different. Of course sometimes it will only 'fix' it for a specific camera angle.
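The missing z-sort described in point 3 is the classic painter's-algorithm step: draw transparent meshes back to front by distance from the camera. A generic sketch in plain Python (the mesh centers and camera position are made up; this is an illustration of the technique, not Grasshopper's actual display code):

```python
def draw_order(camera, mesh_centers):
    """Painter's algorithm: return mesh indices sorted far-to-near,
    so nearer transparent meshes are composited over farther ones."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, camera))
    return sorted(range(len(mesh_centers)),
                  key=lambda i: dist2(mesh_centers[i]),
                  reverse=True)  # farthest first

camera = (0.0, 0.0, 10.0)
centers = [(0, 0, 0), (0, 0, 5), (0, 0, -5)]  # three meshes along the view axis
order = draw_order(camera, centers)           # -> [2, 0, 1], farthest mesh first
```

Note that this ordering is camera-dependent, which matches the observation above that recomputing only 'fixes' the draw order for a specific camera angle.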
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
that aren't relevant anymore, or if there are any I missed, please let me know. Maybe we can get a list like this in a better place as well.
Thank you.
Right Mouse - When wiring, plugs wire into multiple inputs.
Shift+Click - Pick component aggregate.
Shift+Click - Place component aggregate.
Alt+Left Click - Split canvas tool.
Ctrl+Q - Preview toggle.
Ctrl+E - Enable toggle.
Ctrl+Left - Navigate upstream.
Ctrl+Right - Navigate downstream.
Ctrl+M - Mesh edge display toggle.
Ctrl+1 - No preview.
Ctrl+2 - Wireframe preview.
Ctrl+3 - Shaded preview.
Ctrl+Alt+Shift+Click - Save image of canvas.
Ctrl+Alt and Shift+Ctrl+Alt - Highlights components on the canvas and component palette.
Ctrl+Shift - Rewire component input/output.
Double Click - Find/Search.
Alt+Drag - Copy component on canvas.
Ctrl+Tab - Document cycling.
Ctrl+Shift+P - Preferences.
Ctrl+N - New file.
Ctrl+O - Open file.
Ctrl+S - Save file.
Ctrl+Shift+S - Save as.
Ctrl+Alt+S - Save backup.
Ctrl+W - Close open document.
Ctrl+Z - Undo.
Ctrl+Y - Redo.
Ctrl+X - Cut.
Ctrl+C - Copy.
Ctrl+P - Paste.
Ctrl+Alt+V - Paste in place.
Ctrl+Shift+V - Paste in center.
Ctrl+A - Select all.
Ctrl+D - Deselect.
Ctrl+Shift+I - Invert selection.
Ctrl+Shift+A - Grow selection.
Ctrl+Shift+Left Arrow - Grow upstream.
Ctrl+Shift+Right Arrow - Grow downstream.
Ctrl+Left Arrow - Shift upstream.
Ctrl+Right Arrow - Shift downstream.
Ctrl+G - Group selection.
F3 - Find.
F4 - Create.
F5 - Recompute.
Ctrl+B - Send to back.
Ctrl+F - Bring to front.
Ctrl+Shift+B - Move backwards.
Ctrl+Shift+F - Move forwards.
Insert - Bake selected.
Ctrl+Q - Toggle preview.
Ctrl+E - Toggle enabled selected.
…
thing about how to use Grasshopper to break up the unit modules parametrically. Is there any Grasshopper master who could help me? The result I want should look cool and be easy to build.
Reference:
https://vimeo.com/98518748 Video link
http://www.designboom.com/architecture/robotically-fabricated-landesgartenschau-exhibition-hall-06-25-2014/
The question:
1. In the video, how is the parametric form processed? (Kangaroo?)
2. After making everything on the round surface, how do I change the tangent circles into flat polygons?
3. Finally, how do I link every unit module?
…
faces (marked in RED, GREEN and BLUE) are shared by two zones. The small zones (in GRAY) attached to the big hall are not taken into consideration in the research.
In this case, inclined surfaces are included, as the simplification of the grandstand for spectators.
As the image shows, I have some questions:
1 - What should be the correct "surface type" of the inclined surfaces? Floor? Wall? Or something else?
(Actually, I have tried both floors and walls, but the warnings shown below were received. The weird thing is that they had been assigned as floors and walls respectively, as confirmed by checking with the "decomposeByType" component.)
2 - Can I ignore these warnings? If not, why not, and how can I deal with them - how do I assign inclined surfaces the proper "type"?
3 - How can I get "interiorWalls"?
Although the small zones attached to the large hall are not considered in the research, I still need to assign the shared walls as interior walls (and set the EPBC to "Adiabatic"), right? But, having checked the walls with the "decomposeByType" component, all I got were "walls" instead of "interiorWalls". How can I get the "interiorWalls" I want?
Btw, due to the complexity of the geometry (e.g. containing inclined surfaces), I formed the thermal zone of the sports hall surface by surface using the "createHBSrfs" component, as shown in the images above. Do you think this is a proper approach in my case?
Any help will be much appreciated!
Ding
…
vided with U and V into line segments (I'd prefer to use that method instead of a rectangular grid). These segments in the U direction would then be rotated around the V line segments, with a min value of 0 and a max value of 90 degrees, according to an attractor (I'd like it to be an image sampler in the end, but for now I'm trying with a point/multiple points). These lines would then be lofted.
I post the definition below
Here are my problems (i marked them in definition):
1) I managed to get a U-direction line in every second row; I don't know how to get the lines in between (I tried shifting the list, but didn't manage to get the right result).
2) The harder part: I'd like to measure the distance between the attractor point and the bottom point of each line. Then I would like to translate that into a rotation this way (distances used just as an example): distance 0-20 - angle of rotation 0 degrees; distance 20-30 - angle of rotation 1-90 degrees. I have no idea how to build this into the definition. I also have problems remapping numbers to the 0-90 range.
3) I'd like to do this later, but I'd also like to use a black-and-white image sampler, with white = 0 degrees of rotation and black = 90 degrees. I have never experimented with the image sampler and would be grateful for some advice on how to turn colours into degree values.
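The remap in point 2 can be sketched in plain Python (the thresholds 20 and 30 are the example values from the post; in Grasshopper this would typically be built from a distance measurement plus a Remap Numbers component, but the arithmetic is the same):

```python
def distance_to_angle(d, near=20.0, far=30.0, max_angle=90.0):
    """Map a distance to a rotation angle: 0 degrees inside `near`,
    ramping linearly up to `max_angle` at `far`, clamped beyond."""
    if d <= near:
        return 0.0
    if d >= far:
        return max_angle
    t = (d - near) / (far - near)  # normalize 20..30 to 0..1
    return t * max_angle           # scale to 0..90

print(distance_to_angle(10))  # 0.0  (inside the attractor's dead zone)
print(distance_to_angle(25))  # 45.0 (halfway between 20 and 30)
print(distance_to_angle(40))  # 90.0 (fully clamped)
```

The image-sampler variant from point 3 is the same idea with brightness instead of distance: white (brightness 1) maps to 0 degrees and black (brightness 0) to 90, i.e. `angle = max_angle * (1 - brightness)`.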
I politely ask you to help me (especially with the first two points I mentioned). I'm not asking for a ready definition - I would like to understand where my mistakes are.
Below I post a picture of something similar (although I'm trying to rotate by edge, not by center line).
Pardon my English, and thank you for your time and help.
Enjoy your weekend.…
ake this example. However, I still have minor bugs.
Whenever I save and reopen the file, I cannot collapse the drop-down menu to fold the additional inputs. See the pictures below.
1. When I open the gh file.
2. The input is not folded.
3. When I click again, the additional input is duplicated.
Does anyone know how to fix this situation? Any comment will be appreciated.
I attach the C# file below. Thanks! …