thought that architects' love for drawing comes from the need to translate abstract ideas into built 3D reality, and the technology behind that 2D representation had not evolved much until a few decades ago. Our teachers come from those times: times when computers were trying to find their place in the world of representing reality. Imagine people who had always drawn with pencils adapting to these new tools... some became fans of the new methods, others just kept the old-fashioned workflow (as Andrew said in the article, Schumacher vs. Graves).
We were born (at least Andrew and me :P) in the 80's with the first video games and computers (I still remember my old x286 with 1 MB of RAM, a 20 MB hard drive and that MS-DOS interface)... New technology was natural for us... But there is a big difference between traditional drawing and the new computer-aided tools: the learning curve. To draw you only need to take a pen and put it to paper (an interface children understand easily), but traditional computational tools (new touch interfaces are outside this group) are based on a complex logic and environment that is not easy for some people to understand.
In the workshops I teach, I try to put all these tools (new and old) into my students' hands and motivate them to mix and use them together (Andrew knows a little bit about that :P). Why not make a line sketch with GH, then print it and render it with markers? The last step could be to scan the result and enhance it in Photoshop, adding textures, vegetation, some background... There are no rules, only a bunch of tools to explore and use to develop your ideas, evolve them and finally represent them.
I'm betting on touch interfaces (with some augmented reality sauce) like that one to blend both worlds, analog and digital, offering the fluidity and natural interaction that Graves misses in digital tools. And our generation, attached to these "unnatural" interfaces, will need to change its mind and adapt to the new and amazing interface that our children will love.
Just to finish:
http://www.youtube.com/embed/aXV-yaFmQNk…
Added by Ángel Linares at 5:40pm on September 10, 2012
e volume. The yellow line above.
This volume, green in the image above.
With this there was an intersection between the Brep volume of the chair and the lattice.
After that I used Cocoon. Here are the parameters I used for the Brep and the curve; the Brep was offset.
The model is 80 units high and the cell size is 0.2, so there are roughly 400 divisions in Z. If the grid were cubic that would give 64 million cells (400^3). In my view it is important to choose the cell size well so you don't end up with hundreds of millions of cells; here, 6 million was usable. The general rule with Cocoon is always to test it on small objects first.
A close view of the mesh. Edge length is 0.1 units; there are 6 million triangles.
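To make that sizing arithmetic explicit, here is a minimal GhPython-style sketch (the 'x' geometry input and the 0.2 cell size are stand-ins for the actual model, not part of the original post) for estimating the grid size before running a heavy Cocoon solve:

import Rhino

def estimate_cell_count(geometry, cell_size):
    # accurate world-aligned bounding box of the input geometry
    bbox = geometry.GetBoundingBox(True)
    d = bbox.Diagonal
    nx = int(d.X / cell_size) + 1
    ny = int(d.Y / cell_size) + 1
    nz = int(d.Z / cell_size) + 1
    return nx * ny * nz

# 80 units tall at cell size 0.2 -> ~400 divisions in Z;
# a cubic 400 x 400 x 400 grid would already be 64 million cells.
print estimate_cell_count(x, 0.2)  # 'x' is an assumed Brep/Mesh component input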
…
50 and reduced the 'cell size' slider to 0.5. When the 'Azimuth' angle is changed to 180 +- 90 (dawn or dusk), the points are widely dispersed, reducing the density and increasing the number of cells in the "sparse grid". Under these conditions, the number of cells was ~2000 and the Profiler time for 'Boundary' went up to a full minute or more each time 'Altitude' or 'Azimuth' was changed.
So I created this code to benchmark some alternatives and found two interesting things:
'Boundary' surface performance (v.1) is not linear. As the number of surfaces goes from 1000 to 2000, the time per surface goes up dramatically.
I tried three alternatives for creating a rectangular surface at a given point that are all substantially faster: v.2, v.3 and v.4. For 2000 points, v.4 is 150 times faster than v.1 !!!
The performance of v.2, v.3 and v.4 is similar and all scale up very well. To benchmark beyond 2000 points, I recommend disabling the VERY SLOW v.1. At 5000 points the 'Pop2D' component takes ~11.3 seconds, but v.3 and v.4 take less than one second to generate 5000 surfaces!
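The attached file has the actual v.2-v.4 definitions; purely as an illustration of why avoiding the planar-boundary solve helps (my guess at what 'Boundary' does internally, not the component's real code), here is a hedged GhPython sketch comparing a planar Brep per rectangle against building a PlaneSurface directly:

import time
import Rhino.Geometry as rg

points = [rg.Point3d(3.0 * i, 0, 0) for i in range(2000)]
size = rg.Interval(-0.5, 0.5)

# v.1-style: solve a planar boundary Brep from each rectangle
t0 = time.time()
breps = []
for pt in points:
    rect = rg.Rectangle3d(rg.Plane(pt, rg.Vector3d.ZAxis), size, size)
    breps.extend(rg.Brep.CreatePlanarBreps(rect.ToNurbsCurve()))
t1 = time.time()

# alternative: construct the plane surface directly, no boundary solve
srfs = [rg.PlaneSurface(rg.Plane(pt, rg.Vector3d.ZAxis), size, size)
        for pt in points]
t2 = time.time()

print "planar Brep: %.2fs  PlaneSurface: %.2fs" % (t1 - t0, t2 - t1)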
See boundary_2015Nov19a.gh attached.
So I replaced the 'Rectangle' and 'Boundary' components in my sun reflection model with v.4 in focus_2015Nov19b.gh (also attached) and the performance is amazing.
I'm sure someone has mentioned this performance issue with 'Boundary' on the forum before but as with many things, I didn't realize what a major obstacle it can be until I discovered this for myself.…
Added by Joseph Oster at 9:16pm on November 19, 2015
grout lines, a tile surface and a tile perimeter polyline). I then use that as a Mesh (from Rhino) in the second definition.
2. I can tile out the mesh surface and rotate all the tiles in 90 deg. increments.
To get what I wanted, I took the Mesh and copied it in series to make a grid. I can then control the dimensions of the grid (X and Y extents), and I can also rotate the tiles around their centers.
The spacing of the grid is set from an edge curve of the tile (or mesh); this sets the size of the squares in the grid.
See the attached definition, images and Rhino 4 file to give the definitions a shot. I have labeled how to use them.
My question: how can I randomly rotate squares in my grid? I would like the degree of rotation to be random, and also which tiles get rotated.
Also, how might I rotate every other tile, for example, so that I can control the pattern more?
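One way to get both behaviours, as a minimal GhPython sketch rather than the posted definitions ('tiles' and 'centers' are assumed component inputs holding the tile meshes and their matching center points):

import math
import random
import Rhino.Geometry as rg

random.seed(7)  # fixed seed so the "random" pattern is repeatable
rotated = []
for i, (tile, center) in enumerate(zip(tiles, centers)):
    copy = tile.DuplicateMesh()
    angle = 0.0
    if i % 2 == 0:  # every other tile; replace this test with
                    # random.random() < 0.5 for a fully random pick
        angle = random.choice([90.0, 180.0, 270.0])
    if angle:
        xf = rg.Transform.Rotation(math.radians(angle), rg.Vector3d.ZAxis, center)
        copy.Transform(xf)
    rotated.append(copy)
A = rotated  # output: the grid with every other tile rotated in 90 deg. steps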
Thoughts?
Thanks!
…
ror when it comes to points on the edges of the surface. I guess it is because the normal vectors at a few of the points are invalid. Because of these invalid points, an error message comes out saying "Runtime error (PythonException): Unable to add polyline to document" and there is no output. Please give me some help if you know how to handle this problem. I post the code below. Thanks in advance.
---------------------------------------------------------------------------------------------
import Rhino
import rhinoscriptsyntax as rs
import math
import ghpythonlib.components as gh

# base_srf, input_pt and distance_factor are GhPython component inputs.
def outputpoint(base_srf, input_pt, distance_factor):
    # world z-axis built from two points: (0,0,0) + (0,0,10)
    centre_point = rs.AddPoint(0, 0, 0)
    height_point = rs.AddPoint(0, 0, 10)
    zaxis = rs.VectorAdd(centre_point, height_point)
    # surface normal at the closest point to input_pt
    cp_pt = rs.SurfaceClosestPoint(base_srf, input_pt)
    normal_vector = rs.SurfaceNormal(base_srf, cp_pt)
    # "drain" direction: cross of normal and z-axis, rotated 90 deg around the normal
    drain_vector = rs.VectorCrossProduct(normal_vector, zaxis)
    dvector2 = rs.VectorUnitize(drain_vector)
    dvector3 = rs.VectorRotate(dvector2, 90, normal_vector)
    # step along the drain direction, then pull the point back onto the surface
    mpt = gh.DeconstructVector(distance_factor * dvector3)
    moved_pt = rs.PointAdd(input_pt, mpt)
    moved_uv = rs.SurfaceClosestPoint(base_srf, moved_pt)
    return rs.EvaluateSurface(base_srf, moved_uv[0], moved_uv[1])

output_crvs = []
for pt1 in input_pt:
    newPt = pt1
    output_pts = [newPt]
    while len(output_pts) <= 100:  # march 100 steps from each start point
        newPt = outputpoint(base_srf, newPt, distance_factor)
        output_pts.append(newPt)
    output_crvs.append(rs.AddPolyline(output_pts))
A = output_crvs
…
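A sketch of one possible fix, assuming the failure comes from invalid normals (rs.SurfaceNormal returning None, or the cross product degenerating where the normal is parallel to the z-axis): stop the trace before a bad point ever reaches rs.AddPolyline.

# Hedged variant of the stepping function: returns None on a bad normal.
def safe_outputpoint(base_srf, input_pt, distance_factor):
    cp_pt = rs.SurfaceClosestPoint(base_srf, input_pt)
    normal_vector = rs.SurfaceNormal(base_srf, cp_pt)
    if not normal_vector:
        return None
    drain_vector = rs.VectorCrossProduct(normal_vector, (0, 0, 10))
    if rs.VectorLength(drain_vector) < 1e-9:  # normal parallel to z-axis
        return None
    dvector3 = rs.VectorRotate(rs.VectorUnitize(drain_vector), 90, normal_vector)
    moved_pt = rs.PointAdd(input_pt, distance_factor * dvector3)
    moved_uv = rs.SurfaceClosestPoint(base_srf, moved_pt)
    return rs.EvaluateSurface(base_srf, moved_uv[0], moved_uv[1])

# ...and in the loop, stop when the step fails:
#     newPt = safe_outputpoint(base_srf, newPt, distance_factor)
#     if newPt is None: break
# then only call rs.AddPolyline on lists with at least two points.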
g from a list of 12 items, I would find all the combinations taking just 4 at a time.
I'd use a Stream Gate that takes the indices of the items and passes them to a List Item in order to select just the items of that combination. Doing so, I can choose a single combination of indices at a time to pass to the List Item.
At the moment all the data come out of the first gate; all the others are empty.
If I pass these indices to the List Item it gives me an error (probably because of the data structure).
*long version*
I start from a list of 12 segments, all of them sharing a common starting point, with the end points distributed regularly in space. It's quite a simple starting point.
What I'm trying to achieve is to find all the possible spatial configurations made of 2, 3 or 4 segments. I started with 2 segments, so I have 12^2 = 144 possible configurations, but just 4 distinct ones can intuitively be recognized (60°, 90°, 120°, 180°).
Doing the same with 3 segments generates 12^3 = 1728 configurations, and I don't know how many distinct ones. With 4 segments I get 12^4 = 20736 possible configurations.
As you can imagine, many configurations are identical except for orientation, so in the end I'll have to compare the output geometrically to delete duplicates (I'll address this later on).
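For the index bookkeeping itself, a minimal Python sketch may be simpler than the Stream Gate / List Item setup (this is an illustration, not the posted definition): itertools.product enumerates all 12^k ordered configurations, while itertools.combinations already skips reorderings of the same segments, shrinking the search before any geometric duplicate removal.

from itertools import product, combinations

n = 12
print len(list(product(range(n), repeat=2)))  # 144 ordered pairs   (12^2)
print len(list(product(range(n), repeat=3)))  # 1728 ordered triples (12^3)
print len(list(combinations(range(n), 2)))    # 66 unordered pairs, no repeats
print len(list(combinations(range(n), 4)))    # 495 unordered 4-segment picks

# picking the geometry for each configuration ('segments' = the 12 lines):
# for combo in combinations(range(n), 4):
#     config = [segments[i] for i in combo]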
Please, could you help me figure out how to mix these segments into different configurations?
Thank you in advance.…
per bake commands to bake the connected geometry with the corresponding materials.
mxDiff is a simple diffuse material. Only the reflectance colors for 0° and 90° are exposed.
mxEmit is a basic emitter material. You can set the light color, power and efficacy of the emitter.
mxBasic is the most complex material for now. You can set all the properties of a single-layer material. Use this for transparent materials.
mList is the way to go if you don't want to create your own materials. This component returns a list of all the materials in the Maxwell scene manager. Make sure it is evaluated after you add your own materials if you want to see them in the list.…
nowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 challenge, Material Intensities. The Call for Clusters is now open to proposals that respond in innovative ways to this year's challenge.
Deadline: September 19 2011
More information can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=129&Itemid=146
sg2012 takes place from 19-24 March 2012 at EMPAC (http://empac.rpi.edu/) and is hosted by Rensselaer Polytechnic Institute in Troy, upstate New York USA. The Workshop and Conference will be a gathering of the global community of innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four day Workshop 19-22 March, and a public conference beginning with Talkshop 23 March, followed by a Symposium 24 March. The event follows the format of the highly successful preceding events sg2010 Barcelona and sg2011 Copenhagen.
sg2012 Challenge Material Intensities
Simulation, Energy, Environment
Imagine the design space of architecture was no longer at the scale of rooms, walls and atria, but that of cells, grains and vapour droplets. Rather than the flow of people, services, or construction schedules, the focus becomes the flow of light, vapour, molecular vibrations and growth schedules: design from the inside out.
The sg2012 challenge, Material Intensities, is intended to dissolve our notion of the built environment as inert constructions enclosing physically sealed spaces. Spaces and boundaries are abundant with vibration, fluctuating intensities, shifting gradients and flows. The materials that define them are in a continual state of becoming: a dance of energy and information.

Material potential is defined by multiple properties: acoustical, chemical, electrical, environmental, magnetic, manufacturing, mechanical, optical, radiological, sensorial, and thermal. The challenge for sg2012 Material Intensities is to consider material economy when creating environments, micro-climates and contexts congenial for social interaction, activities and organisation. This challenge calls for design innovation and dialogue between disciplines and responsibilities.

sg2010 Working Prototypes strove to emancipate digital design from the hard drive by moving from the virtual to the actual in wrestling with the tangible world of physical fabrication. sg2011 Building the Invisible focused on informing digital design with real world data. sg2012 Material Intensities strives to energise our digital prototypes and infuse them with material behaviour. They have the potential to become rich simulations informed by the material dynamics, chemical composition, energy flows, force fields and environmental conditions that feed back into the design process.
More information can be found at http://www.smartgeometry.org…
mmon SDK, but I heard it is used in Rhino 5.
or example: the Grasshopper Primer, second edition, page 98.
I don't know what "doc.absolutetolerance" is or where I can find out about it... I don't know whether it should be a class or a function. I tried to search the Rhino 4 .NET SDK but I can't find it... maybe it's my searching problem.
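For what it's worth, the document's absolute tolerance is the model-wide distance tolerance Rhino applies in operations like intersections and joins. A minimal GhPython sketch of the modern RhinoCommon equivalent (the exact spelling in the old Rhino 4 .NET SDK may differ):

import Rhino
import rhinoscriptsyntax as rs

# model absolute tolerance, in current model units (e.g. 0.001)
print Rhino.RhinoDoc.ActiveDoc.ModelAbsoluteTolerance
print rs.UnitAbsoluteTolerance()  # same value via rhinoscriptsyntax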
But thanks to the Grasshopper Primer, I do know many kinds of variables, many functions, basic structure, loops and conditions, and what onutil.xxxx and rhutil.xxxx are. Still, I found all this information is not helpful enough when reading the examples downloaded from many discussions. When I find a new variable or new function, such as "doc.absolutetolerance" in the code above, I don't know where I can find an introduction to it. I tried to use the auto-complete, such as
dim xxxx as oncurve
xxxx. to find the OnCurve class's functions and variables, but it's too inefficient.
-----------------------------------------------
Also, I don't know the difference between the VB Script component and the DotNET VB Script component...
I found that when I type onutil. in the VB Script component, the auto-complete shows nothing... and the variable declarations are not the same: in VB Script it is dim xxxx as curve, but in DotNET VB Script it is dim xxxx as oncurve, which is what the Grasshopper Primer taught me. But I guess... the VB Script component is just like RhinoScript (not the same, though), and the DotNET VB Script is more powerful than it. Am I right?
------------------------------------------------
Lastly, I don't know what these are:
Imports System
Imports System.IO
Imports System.Xml
Imports System.Data
Imports System.Drawing
Imports System.Reflection
Imports System.Collections
Imports System.Windows.Forms
Imports Microsoft.VisualBasic
Imports System.Collections.Generic
Imports System.Runtime.InteropServices
When I search Google for them, the introductions are too professional for me to understand... I just want to know what I can do by using them...
-------------------------
sorry for disturbing you so much!!!
best regards!
yours truly
YUAN.T
…
used of 180 being for the northern hemisphere and 0 for the southern hemisphere.

For the optimal tilt, to my knowledge, they are mostly based on correcting the location's latitude through a single formula.

The TOF component is more sophisticated: it essentially replicates Solmetric's Annual Insolation Lookup tool. What it does is create a grid of points, where each point represents the calculated annual insolation on the surface (PV module, SWH collector, facade, any kind of surface) for a single tilt and azimuth angle. Each point is then elevated according to its annual insolation value, and a mesh is created from that grid of points. The portion of the mesh which is highest represents the optimal tilt and azimuth angles. So the higher your "precision_" input is, the more points the mesh will have, and thus the more precise the final optimal tilt and azimuth will be.

For the diffuse component of the annual incident solar radiation at each point, the modified Perez 1990 model is used. The direct component comes from the classical cosine law, and the ground-reflected component from Liu and Jordan (1963). So the TOF component calculates the optimal tilt and azimuth based on annual incident solar radiation, not AC energy....…
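To illustrate the lookup idea only (this is not Ladybug's actual code), here is a brute-force Python sketch that sweeps a tilt/azimuth grid and keeps the maximum; annual_insolation() is a hypothetical stand-in for the Perez (diffuse) + cosine-law (direct) + Liu-Jordan (ground-reflected) sum:

import math

def annual_insolation(tilt, azimuth):
    # hypothetical smooth dummy so the sketch runs; the real component
    # sums direct, diffuse (Perez 1990) and ground-reflected radiation
    return math.cos(math.radians(tilt - 35.0)) * (1.0 + math.cos(math.radians(azimuth - 180.0)))

precision = 60  # like the "precision_" input: more grid points, finer answer
best = None
for i in range(precision + 1):
    for j in range(precision + 1):
        tilt = 90.0 * i / precision        # 0..90 degrees
        azimuth = 360.0 * j / precision    # 0..360 degrees
        e = annual_insolation(tilt, azimuth)
        if best is None or e > best[0]:
            best = (e, tilt, azimuth)

print "optimal tilt %.1f deg, azimuth %.1f deg" % (best[1], best[2])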