Since this is on the programming forum, I'm guessing you're looking for a VB or C# approach to this?
Here are two algorithms (pseudocode, very similar) which will simulate a droplet of water on a surface (ignoring momentum, surface tension, surface angle, collisions with other drops, etc.).
Algorithm one: easy to implement, but slows down on near-horizontal areas:
1) Pick a point somewhere on the surface. How you get to this level is your problem.
2) Lower the point by a certain fixed amount along the z-axis. Say, 0.1 units.
3) Project the lowered point back onto the BRep using a ClosestPoint function.
4) If the newly projected point is very similar to the input point, abort, otherwise, repeat step 2.
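The four steps above can be sketched in Python. This is an illustration only: the surface is an assumed heightfield z = f(x, y), and a small gradient-descent routine stands in for the BRep ClosestPoint call (it is not the RhinoCommon API):

```python
import math

def f(x, y):                      # example surface: a bowl, z = x^2 + y^2
    return x * x + y * y

def closest_point(px, py, pz, lr=0.05, iters=200, h=1e-5):
    # stand-in for a BRep ClosestPoint call: gradient descent on the squared
    # distance from (px, py, pz) to the heightfield point (u, v, f(u, v))
    u, v = px, py
    for _ in range(iters):
        fu = (f(u + h, v) - f(u - h, v)) / (2 * h)   # numeric surface slopes
        fv = (f(u, v + h) - f(u, v - h)) / (2 * h)
        w = f(u, v)
        gu = 2 * (u - px) + 2 * (w - pz) * fu        # gradient of distance^2
        gv = 2 * (v - py) + 2 * (w - pz) * fv
        u, v = u - lr * gu, v - lr * gv
    return u, v, f(u, v)

def droplet(x, y, dz=0.1, tol=1e-4, max_iter=500):
    z = f(x, y)
    for _ in range(max_iter):
        # step 2: lower the point; step 3: project it back onto the surface
        nx, ny, nz = closest_point(x, y, z - dz)
        # step 4: abort once the projected point stops moving
        if math.dist((x, y, z), (nx, ny, nz)) < tol:
            break
        x, y, z = nx, ny, nz
    return x, y, z
```

Starting a drop at (1, 0) on the bowl, it slides down and settles near the bottom at the origin, and you can see the slow-down on flat areas: the projected point barely moves once the surface is nearly horizontal, which is exactly why the step-size-controlled variant below exists.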
Algorithm two: more difficult, but with better control over the step size:
1) Pick a point somewhere on the surface. How you get to this level is your problem.
2) Find the normal vector at this point.
3) If the normal vector is (nearly) straight up, abort.
4) Find the CrossProduct between the normal vector and the straight-up vector.
5) Rotate the normal vector 90 degrees around this cross-product.
6) Scale the rotated vector so it becomes the length of your sampling accuracy.
7) Move the point along the vector and pull it back onto the surface (this should be a short distance if your step size is small).
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 12:20pm on November 23, 2009
google's data (please correct if I'm wrong):
"SRTM1 data is sampled at one arcsecond (about 30 meters) and SRTM3 data is sampled at three arcseconds (about 90 meters). The higher resolution SRTM1 data is available for most of the US and the lower res SRTM3 data is available for most of the world."
The 3x3 stitching definition above is done in Rhino 4, but it doesn't actually "stitch together" or merge the surfaces into one; I had to do that manually in Rhino with the merge surfaces command, which I think does a better job than Grasshopper.
Also, I think the calculations within it (the distance of one degree of change in lat/lng) won't be accurate enough (or high enough in resolution), even though they are correct, so I cannot guarantee the 3x3 pieces are perfectly neighbouring sets of data (they might contain very, very tiny strips of overlapped/missed topography data). However, this error is really insignificant next to the limited resolution of the generated topography, so it is negligible if you're not a perfectionist like me.
Edit: For bigger areas Elk is much easier, but for smaller areas where you want to specify the area size Xiaoming's component is more convenient I think.…
s set up. All the goals in Kangaroo have indices identifying which of the points in the system they act on.
Assigning these indices automatically while still allowing inputs to change during simulation requires some tricks to work around the directed acyclic nature of Grasshopper.
In remeshing, the indexing and even the number of points change, which greatly complicates things if you also want goals assigned to particular edges/points.
Last time I spent serious time on this, though, was before the K2 library, so maybe it's time to revisit soon. I think it would probably overcomplicate things to try to accommodate this remeshing directly within the main Kangaroo solver component, but there could be a dedicated membranes tool (though I know you also want me to prioritize documenting the existing tools!).
Stepping back for a moment though - it is usually possible to separate the remeshing and relaxation into separate steps. Membrane relaxation generally needs well shaped triangles (no angles over 90), and remeshing can give you this. Of course the triangles change shape during relaxation, but if the unrelaxed geometry is not too dramatically different from the end result, and you use tangential smoothing to keep vertices from drifting, they can stay well shaped throughout. For bigger changes in geometry you could also remesh-relax-remesh-relax.…
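The tangential smoothing mentioned above can be illustrated one dimension lower: vertices on a circle, where the normal component of the smoothing move is discarded so that points even out their spacing without drifting off the "surface". This is a toy sketch, not Kangaroo's implementation:

```python
import math

def tangential_smooth(pts, lam=0.5, iters=100):
    # move each vertex toward the average of its neighbours, keep only the
    # component tangent to the unit circle, then pull back onto the circle
    n = len(pts)
    for _ in range(iters):
        out = []
        for i, (x, y) in enumerate(pts):
            px, py = pts[i - 1]
            qx, qy = pts[(i + 1) % n]
            lx = (px + qx) / 2 - x               # umbrella Laplacian
            ly = (py + qy) / 2 - y
            dot = lx * x + ly * y                # the circle's normal at a
            tx, ty = lx - dot * x, ly - dot * y  # unit point (x, y) is radial
            x, y = x + lam * tx, y + lam * ty
            m = math.hypot(x, y)                 # pull back onto the "surface"
            out.append((x / m, y / m))
        pts = out
    return pts

# unevenly spaced sample points on the unit circle
angles = [0.0, 0.3, 0.5, 2.0, 3.0, 4.5, 5.5]
smoothed = tangential_smooth([(math.cos(a), math.sin(a)) for a in angles])
```

After smoothing, the points are nearly evenly spaced around the circle but have not left it, which is the property that keeps triangles well shaped during relaxation.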
Added by Daniel Piker at 10:29am on January 13, 2016
nds except only using CreateHBSrfs which can be unstable for me with some geometry (GH crashes).
If you want proof of the rotation not taking place using MSH2RAD, please look in the Daysim*.rad file that gets created when performing a Daysim simulation.
See example below. The same polygon is processed via the CreateHBSrfs component and via the MSH2RAD component. The polygon gets rotated 90 degrees using CreateHBSrfs but unfortunately not with MSH2RAD:
_______________
##GENERATED BY HONEYBEE
OPAQUE polygon b69a317a402d42c1994f410463cd_00 0 12 -15.824400 -5.615800 0.000000 -15.824400 -44.175400 0.000000 -15.824400 -44.175400 28.363100 -15.824400 -5.615800 28.363100
# SOURCE FILE: c:\ladybug\000000_TEST\SURR\MSH2RADFiles\SURR.rad
## c:\radiance\bin\\obj2rad -f c:\ladybug\000000_TEST\SURR\MSH2RADFiles\SURR.obj
## OBJ file written by TurtlePyMesh
OPAQUE polygon object_1.10 0 12 44.175400 -15.824400 28.363100 5.615820 -15.824400 28.363100 5.615820 -15.824400 0.000000 44.175400 -15.824400 0.000000
_______________
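To confirm that the two polygons differ by exactly the missing 90 degree rotation, one can check numerically that rotating the MSH2RAD vertices about the world Z axis reproduces the CreateHBSrfs vertices (a small sketch using the coordinates quoted above; the rotation direction shown is what happens to fit this data):

```python
import math

# vertices copied from the two OPAQUE polygons above, as (x, y, z) triples
create_hb = [(-15.8244, -5.6158, 0.0), (-15.8244, -44.1754, 0.0),
             (-15.8244, -44.1754, 28.3631), (-15.8244, -5.6158, 28.3631)]
msh2rad   = [(44.1754, -15.8244, 28.3631), (5.61582, -15.8244, 28.3631),
             (5.61582, -15.8244, 0.0), (44.1754, -15.8244, 0.0)]

def rot90_about_z(p):
    # (x, y) -> (y, -x): a 90 degree rotation about the world Z axis
    x, y, z = p
    return (y, -x, z)

def same_vertex_set(a, b, tol=1e-3):
    # order-independent comparison of two vertex lists
    return all(any(math.dist(p, q) < tol for q in b) for p in a)

rotated = [rot90_about_z(p) for p in msh2rad]
```

Within rounding, the rotated MSH2RAD vertex set matches the CreateHBSrfs one, so the geometry itself is identical and only the rotation step is skipped.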
All the best
-M…
what i want.
My intention is that the randomly selected brick be rotated 90 degrees, so that its header face is proud of the actual wall face rather than its stretcher face.
I can easily rotate the selected bricks and then protrude them in the desired direction. However, if I rotate a brick, a gap is created on either side of the rotated brick (refer to sketch 1). I want to set a parameter that CLOSES THAT GAP, so that the wall remains watertight (refer to sketch 2).
Brick size used 230mm (L) x 76mm(W) x 70mm(H).
Attached are
1) 1-Sketch: Explaining my conundrum
2) 2-Sketch: Explaining what i want to achieve
3) 3-Perspective: Baked Geometry of what i have achieved so far
Please feel free to ask for my GH definition if required.
I'm an absolute dummy in VB scripting, so any insight to solve my conundrum will be highly appreciated.
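For what it's worth, the gap arithmetic is simple (this assumes the rotated brick stays centred in its original slot, which may or may not match your definition):

```python
# brick dimensions from the post, in millimetres
LENGTH, WIDTH = 230.0, 76.0          # stretcher length, header width

# rotating a brick 90 degrees exposes its 76 mm header inside a slot
# that the 230 mm stretcher used to fill completely
gap_total = LENGTH - WIDTH           # combined gap across the slot
gap_each_side = gap_total / 2        # gap on each side if the brick is centred
```

So the adjacent bricks (or the joint geometry) have to absorb 77 mm per side, e.g. by stretching or sliding the neighbouring stretchers, to keep the wall watertight.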
Cheers
…
if I select them one by one, it shows an angle,
and also, selecting a different number of curves shows different angles for the same curve. But the most important thing is that all of them are wrong angles;
if I draw a 90 degree curve, the answer is right.
thanks guys…
doing this with the current tools or a bit of scripting since the Flickr API allows you to make requests in a REST format, but utilizing the Flickr.net API library makes it much simpler.
First and foremost, you need a Flickr API key...do you have one of those?
A great way to get to know the Flickr API is with the API Explorer. Here is a link to the page for the flickr.photos.search method explorer: http://www.flickr.com/services/api/explore/flickr.photos.search
The cool thing about this page is that it generates the REST Http call towards the bottom. So, here is what I did:
1. Grab the coordinates of the bounding box per Flickr API request:
bbox (Optional)
A comma-delimited list of 4 values defining the Bounding Box of the area that will be searched. The 4 values represent the bottom-left corner of the box and the top-right corner, minimum_longitude, minimum_latitude, maximum_longitude, maximum_latitude. Longitude has a range of -180 to 180 , latitude of -90 to 90. Defaults to -180, -90, 180, 90 if not specified. Unlike standard photo queries, geo (or bounding box) queries will only return 250 results per page. Geo queries require some sort of limiting agent in order to prevent the database from crying. This is basically like the check against "parameterless searches" for queries without a geo component. A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future).
So, I went to Google Earth, picked a city (London, UK) and dropped two pins:
This gave me two locations, which I can put into the Explorer Page next to the bbox option. Here is what I put for these two points: -0.155941,51.496768,-0.116783,51.511431
2. Check has_geo
3. In extras, type in geo
4. Make the call!
You will see a list of responses in an XML format, these responses will be from the first page. Geolocated photos are limited to 250 / page, so you will have to grab them page by page.
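Once you have a page of XML, pulling out the coordinates is simple: with extras=geo, each photo element carries latitude/longitude attributes. The sample below is a trimmed, made-up response in that shape (attribute names assumed from the REST response format):

```python
import xml.etree.ElementTree as ET

# a trimmed sample of what flickr.photos.search returns with extras=geo
sample = """<rsp stat="ok">
  <photos page="1" pages="4" perpage="250" total="1000">
    <photo id="1" latitude="51.5007" longitude="-0.1246" accuracy="16"/>
    <photo id="2" latitude="51.5033" longitude="-0.1195" accuracy="16"/>
  </photos>
</rsp>"""

# collect (lat, lon) pairs from every photo element on the page
coords = [(float(p.get("latitude")), float(p.get("longitude")))
          for p in ET.fromstring(sample).iter("photo")]
```

Repeating this per page (using the pages attribute on the photos element) gives the full set of geolocated results.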
If you want to add more options (minimum upload date, maximum upload date, etc.) you can do this as well.
The best is at the bottom, you get the full http call for this: http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=ffd44f601393a46e86aa3a5f8a013360&bbox=-0.155941%2C51.496768%2C-0.116783%2C51.511431&has_geo=&extras=geo&format=rest&api_sig=b42330e5d1523bd5fe60c2ad43acde99
Notice this call has some other api key, you should eventually replace this with your own.
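If you build the call yourself rather than copying it from the Explorer, URL-encoding the parameters avoids escaping mistakes. A sketch using Python's standard library; the placeholder key is an assumption you must replace:

```python
from urllib.parse import urlencode

# placeholder key: substitute your own Flickr API key
params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",
    "bbox": "-0.155941,51.496768,-0.116783,51.511431",  # min_lon,min_lat,max_lon,max_lat
    "has_geo": "1",
    "extras": "geo",
    "format": "rest",
}
url = "https://api.flickr.com/services/rest/?" + urlencode(params)
```

Note that urlencode escapes the commas in the bounding box as %2C, exactly as in the Explorer-generated call above.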
You could copy and paste this into a browser and you will get the results with the latitude and longitude:
So this is really what you need to know to do this through GH. Since gHowl has an XML parser component that can access files on the web, you should be able to use the same http call into this component.
Eventually, we get a response, and we need to grab the lat and lon data. With gHowl we can map these to xyz coordinates, and generate the heatmap...this is just a linear mapping:
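The linear mapping itself is just a remap of the bounding box onto model coordinates. A sketch (the 100 x 100 target rectangle is an arbitrary assumption; use whatever your model needs):

```python
def remap(v, lo, hi, new_lo, new_hi):
    # linear interpolation from [lo, hi] onto [new_lo, new_hi]
    return new_lo + (v - lo) / (hi - lo) * (new_hi - new_lo)

# the bounding box used above: min_lon, min_lat, max_lon, max_lat
min_lon, min_lat, max_lon, max_lat = -0.155941, 51.496768, -0.116783, 51.511431

def to_xy(lon, lat, width=100.0, height=100.0):
    # map geographic coordinates onto a width x height model rectangle
    return (remap(lon, min_lon, max_lon, 0.0, width),
            remap(lat, min_lat, max_lat, 0.0, height))
```

The corners of the bounding box land on the corners of the model rectangle, and every photo's lat/lon falls proportionally in between, which is all the heatmap needs.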
Attached are both the Rhino file and the Grasshopper file, as well as the image underlay.
I am working on a series of components that makes this more straightforward, but for now, this should get you started.
…
cy of design communication and the control of information flow are as important as the creativity of ideas. In response to the concurrent digital evolution emerging in the architectural industry worldwide, the Faculty of Architecture at The University of Hong Kong will host a two-week intensive summer program in 2011 named Digital Practice. Led by professors from The University of Hong Kong, as well as invited practitioners with expertise in the practice of cutting-edge digital techniques, the program offers participants opportunities to experience applications of computational tools during the different stages of an architectural project, i.e. concept design, form finding and optimization, and the delivery, management and communication of design information in a team-based working environment. By learning advanced computational techniques through case studies in the context of Hong Kong, participants are expected to go beyond the conventional perception of technology, considering users and tools as a feedback-based entity instead of a dichotomy. The program, which is taught in English, includes a series of related evening lectures delivered by teaching staff and invited local architects, showcasing the broad application of digital techniques in built projects.…
e design intent, but this is what Inventor is good at. The way it packages bits of 'scripted' components into 'little models' that can be stored and re-assembled is central to MCAD working. The big speed/usability advantage for the user that apps like Inventor provide is that all the defining, handling, and assembling/gluing to adjacent components is done as part of the 'main loop', with all the hooks that can cater to user interaction, i.e. traditional modeling. One example of this is how Revit handles the placing of Adaptive Components. ACs (and GC's GFTs) are pretty much a copy of Catia PowerCopies (which are probably a copy of something else). When placed, the AC's input points are transferred one by one to the cursor for the user to interactively place them. When copied, it tries to keep the same inputs while changing its position/parameters. This saves a lot of time and nerves.
Catia, OTOH, is still thinking in terms of scripting: it looks for matching property names, or uses a script to match strings that nearly match. Sure, sometimes this is unavoidable, but I think there is a lot of room for incorporating a more traditional 'event-based' interface or 'wrapper' around the scripted components. So much is scripted in GH; maybe it should also be possible to script/define/constrain/assist the placement/gluing of the results? An example of this is how Modo's Toolpipe works. The Toolpipe is a simple tool that records the active selection, snap/alignment/working plane, and tool settings for re-use. I could see the user benefiting if the GH component were aware of the app's 'state' when placing/assembling components.
Also, a lot of simple things could be 'modeled' first and translated into scripted form if GH could read the active workplane, snap settings, etc. Draw first, convert to a hand-scripted script later?
Columns: Looking at your description, the vertical elements were modeled in Rhino and referenced in GH? Five hours to get some points on the lines? And using Excel as the design table? I think this could be 'drawn' and constrained in Inventor in a lot less time. I know the GH model would have a lot of flexibility, but in this case, what can you do with it that wasn't provided by an Inventor model? The other thing that MCAD apps like Inventor have is the 'structured' interface that offers up all that setting-out information (the coordinate systems, work planes, parameters, etc.) in a concise fashion in the 'history tree'. This translates into user speed. GH's canvas is a bit more freeform; I suppose the info is all there and linked, so a bit of re-jigging is easy. Also, see how T-Flex can even embed sliders and other parameter input boxes into the model itself. Pretty handy/fast to understand, which also means more speed. Would love to understand what you did by sketching.
Starting point: I think we are talking at cross purposes. AFAIK, the solving sequence of GH's scripted components is fixed. It won't do circular dependencies... without a fight. The inter-component dependencies are not 'managed' the way constraint solvers manage them for MCAD apps.
With a manager, if one of the beams is connected to the column, changes in either component would trigger changes in the other to preserve the connection, regardless of the creation history. In GH, the dependencies are fixed, and the connection points would probably need to be defined independently and placed 'upstream' of both elements. This makes editing laborious... but DAG processing is a lot quicker than constraint solving. Switching direction seems to be possible in the animation world: Maya etc. have IK/FK switching, which seems to be able to reverse the solving direction on demand. Not sure how, or whether the rig is scripted.…
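The one-directional evaluation described above amounts to a single topological sort over the component graph. A toy illustration (not GH's actual scheduler) of why DAG processing is cheap:

```python
from graphlib import TopologicalSorter   # Python 3.9+ standard library

# a toy component graph: each entry lists the components it depends on
deps = {
    "column": {"point"},
    "beam":   {"column"},
    "roof":   {"beam", "column"},
}

# a DAG evaluator sorts once and computes each node exactly one time;
# a constraint solver would instead iterate until all relations agree
order = list(TopologicalSorter(deps).static_order())
```

Every upstream node appears before its dependents, so one linear pass suffices; the price is that a change to "beam" can never propagate back to "column", which is exactly the bidirectional behaviour a constraint manager adds.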