u can still find some wonky behaviour in GH related to data trees. My experience is that new users get the hang of it quite quickly once they learn that a tree is in fact not a tree but, first and foremost, a set of lists, where the path shows how the pieces of data used to be grouped.
Branch count checking
A component has multiple tree inputs with different numbers of branches, each input having more than two branches. While I understand the logic of combining multiple trees, I have not once encountered a case where combining e.g. an input of 2 branches with an input of 4 branches gives any kind of sensible output.
Desired behaviour: if a component's multi-branch inputs have different branch counts, the component should throw a warning ("strict branches" behaviour?). For example: take an Offset component with 6 branches of curves and 5 branches of offset distances. It is extremely likely that this is the result of an error earlier in the definition. Yet it runs without a problem - the last branch is simply repeated - and later on it is quite hard to discover that something went wrong.
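The proposed check could be sketched like this. The tree model (a dict mapping path tuples to item lists) and the helper name are hypothetical; this is not an existing Grasshopper API, just an illustration of the rule described above.

```python
# Hedged sketch of the proposed "strict branches" check. Trees are modelled
# as dicts mapping path tuples to item lists (an assumption for illustration).
def needs_branch_warning(*trees):
    # Single-branch inputs replicate cleanly across any branch count,
    # so only multi-branch inputs are compared against each other.
    multi_branch_counts = {len(t) for t in trees if len(t) > 1}
    return len(multi_branch_counts) > 1

curves  = {(0, i): ["curve"] for i in range(6)}  # 6 branches of curves
offsets = {(0, i): [1.0] for i in range(5)}      # 5 branches of offsets
print(needs_branch_warning(curves, offsets))     # → True: warn the user
```

With the Offset example from above (6 branches of curves against 5 branches of distances) the check fires, while a single-branch input paired with any multi-branch input passes silently.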
Checking branch count
The most important numbers are the amount of branches and the amount of items in a tree. Desired behaviour: the hover tooltips should show both the amount of data and the amount of branches.
Trees with paths of different rank
A tree that contains {0;0}, {0} and {0;0;1} at the same time is usually a sign of badly merged trees, faulty C# components, or just nasty coding habits.
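The mixed-rank symptom is easy to state mechanically: a path like {0;0;1} is just a tuple of indices, and a tree is suspect when its paths do not all have the same length. A minimal sketch (the helper is hypothetical):

```python
# Sketch of a mixed-rank check: a path {0;0;1} becomes the tuple (0, 0, 1),
# and a tree mixes ranks when its paths have differing lengths.
def has_mixed_ranks(paths):
    return len({len(p) for p in paths}) > 1

print(has_mixed_ranks([(0,), (0, 0), (0, 0, 1)]))  # → True: flag this tree
print(has_mixed_ranks([(0, 0), (0, 1)]))           # → False: ranks agree
```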
Trim as undo of graft, instead of flatten
Having Trim in the context menu would provide an easy way to undo a graft. Right now the easiest way for many people is to flatten the tree and start all over again - while just removing the last index keeps the underlying history and makes it easier to write reusable pieces of code when you prepend data trees to them.
Component to get a branch by index, not by path
This would be great. Suppose you have a grid of points, grouped by row. It would help to show: "look, this is the first branch, it's called {0;0;1}, it's got 10 points, and these points are the first row."
This is analogous to using List Item to show what the first point, second point, and so on are.
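The idea could be sketched as follows, with the same dict-of-paths tree model used for illustration (the helper is hypothetical, not an existing component):

```python
# Sketch of a "branch by index" helper: paths sort lexicographically, so
# index 0 is the first branch regardless of what its path happens to be.
def branch_at(tree, index):
    path = sorted(tree)[index]
    return path, tree[path]

grid = {(0, 0, 1): ["p0", "p1"], (0, 0, 2): ["p2", "p3"]}  # points by row
path, row = branch_at(grid, 0)
print(path, row)  # → (0, 0, 1) ['p0', 'p1']  (the first row of points)
```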
Semantic path names (maybe far-fetched)
What if we could add a short name of each method that was executed to the path list, so it can show:
{Slider 0; Series 0; Point 0}
{Slider 0; Series 0; Point 1}
{Slider 0; Series 0; Point 2}
{Slider 0; Series 0; Point 3}
{Slider 0; Series 1; Point 0}
{Slider 0; Series 1; Point 1}
{Slider 0; Series 1; Point 2}
{Slider 0; Series 1; Point 3}
Make the input/data matching inside components explicit
Can we make it even more obvious that a component is not a black box that is executed once, but in fact an iteration machine that tries to make sense of the inputs that are fed to it?
Show the data combination. How data input A relates to data input B and data input C is currently very implicit and just plain hard to learn; understanding it requires the ability to relate the output back to the input. If we could show, textually or even graphically, what data matching occurred inside a component, it would greatly help the understanding (and debugging) of "what's going on here in this component".
A verbose explanation of the data matching in component A
Iteration one:
- Geometry: We take the data item from Branch 0, Position 0: (Point 0,0,0)
- Motion: We take the data item from Branch 0, Position 0: (Vector 0,0,0)
Iteration two:
- Geometry: We take the data item from Branch 0, Position 0: (Point 0,0,0)
- Motion: We take the data item from Branch 0, Position 1: (Vector 10,0,0)
Iteration three:
- Geometry: We take the data item from Branch 0, Position 0: (Point 0,0,0)
- Motion: We take the data item from Branch 0, Position 2: (Vector 20,0,0)
etc.
A verbose explanation of the data matching in component B
Iteration one:
- Geometry: We take the data item from Branch 0, Position 0: (Point 0,0,0)
- Motion: We take the data item from Branch 0, Position 0: (Vector 0,0,0)
..
Iteration seven:
- Geometry: We take the data item from Branch 0, Position 0: (Point 0,0,0)
- Motion: We take the data item from Branch 7, Position 0: (Vector 0,70,0)
..
Iteration 27:
- Geometry: We take the data item from Branch 0, Position 7: (Point 80,0,0)
- Motion: We take the data item from Branch 2, Position 0: (Vector 0,20,0)
…
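The matching traced out for component A above can be sketched in a few lines: "longest list" matching within one branch, where the shorter input repeats its last item. This mimics the behaviour described in the text; it is not Grasshopper's actual implementation.

```python
# Minimal sketch of longest-list matching within a single branch:
# the shorter input repeats its last item until the longer one runs out.
def longest_list_match(a, b):
    n = max(len(a), len(b))
    return [(a[min(i, len(a) - 1)], b[min(i, len(b) - 1)]) for i in range(n)]

geometry = ["Point 0,0,0"]
motion = ["Vector 0,0,0", "Vector 10,0,0", "Vector 20,0,0"]
for i, (g, m) in enumerate(longest_list_match(geometry, motion), start=1):
    print(f"Iteration {i}: Geometry = {g}, Motion = {m}")
```

Run against the example above, the single geometry item is paired with each of the three motion vectors in turn, exactly the iteration log shown for component A.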
Albahari) > my favorite
The reference: C# Language specs ECMA-334
The candidates:
C# Fundamentals (Nakov/Kolev & Co)
C# Head First (Stellman/Greene)
C# Language (Jones)
Step 2: read the cookies (computer OFF)
Step 3: re-read the cookies (computer OFF)
...
Step 121: open computer
Step 122: get the 30 steps to heaven (i.e. hell)
Step 123: shut down computer > change planet
May The Force (the Dark Option) be with you.
…
3d.com/group/anemone). Changing a gene starts the loop, which iterates 5 times. The loop outputs a mesh at each iteration, but the one we want to optimize is the fifth result, so we have a gate that opens when the iteration count reaches 5 and passes both the mesh and the objectives to Octopus.
When running Octopus, we are facing the following:
- Neither the labels of the axes nor the results appear in the Octopus solution space, although the axes' names are shown in the axes list.
- When Octopus does run, it generates more solutions than the set population (a population of 5 individuals over 3 generations generates more than 30 individuals, all in generation 1).
- Even when resetting, Octopus doesn't reset: it adds new individuals/phenotypes to the generation instead of starting fresh.
- When it crashes, the Octopus error log shows "reduction of environmental selection did not yield the correct archive size" (I have seen the post by Stan regarding the same error message in the log).
I have attached a diagram showing the overall logic and some screenshots (which I hope are clear).
…
administration, education and consumption, the contemporary world can be increasingly conceived as a global and systemic environment. All our activities are profoundly influenced by a new condition of fluidity and interdependence of various and, very often, unpredictable parameters and factors, introducing us progressively to a systemic and parametric understanding of the world and our position in it.
Architecture and the building process reflect this new conception of the world by redefining themselves according to new principles and means. The fast development of digital techniques to simulate, represent and generate architecture promises a continuous design process, including the seamless transfer of information between the involved parties and making performance a key issue in the planning process. In this process, concepts of adaptability, transformability and flexibility are replacing already tested and secure solutions, customization is replacing standardization and metrics, and digital tools are replacing analogue representations. In these new conditions the scaleless and the seamless appear as the two key pillars of the requested integration in contemporary architectural practice and education.
Do the design and planning practices and construction industries respond with digital synergies to these new requests? Can the curricula of architecture schools escape from the dominance of traditional fragmentation within their structure and the organisation of the modules and academic units towards more holistic concepts and workflows? How can the traditionally separate courses offered by departments and modules of architectural education institutions be redefined in order to assure scaleless and seamless thinking about form, materiality and its social and cultural representations, its environmental aspects and its urban and contextual references?
The organisers are inviting architects, teachers and researchers of architecture in Europe to present their views, research outcomes and teaching experiences related to the theme of the Conference.
An abstract of 600-700 words must be submitted by September 5, 2012. Please indicate into which of the five aforementioned themes your abstract falls. You will be asked to submit your final paper by the 22nd of October 2012 for the publication of the proceedings, which will be distributed to all EAAE/ENHSA school members.
For any further queries please do not hesitate to contact us on info@enhsa.net or info@scaleless-seamless.org…
me as our environment becomes more polluted.
Mushrooms may turn out to be important keys to both human and planetary health. Their indispensable role in recycling organic matter has long been known. Mycelium can be selected and trained to break down toxic waste, converting it into harmless metabolites. Mushroom allies may even be able to detoxify chemical warfare agents. The use of fungi to improve the health of the environment - filtering water, helping trees grow in forests and plants in gardens - is one facet of a larger strategy Paul Stamets calls mycorestoration.
In its broader meaning, mycoremediation is the process by which fungi degrade or remove toxins from the environment. Mycoremediation practices involve mixing mycelium into contaminated soil or placing mycelial mats over toxic sites. The powerful enzymes secreted by specific fungi are able to digest lignin and cellulose, the primary structural components of wood. These digestive enzymes can also break down a surprisingly wide range of toxins whose chemical bonds are similar to those of wood.
BRIEF
Noumena, Green Fab Lab and Fab Lab Barcelona present “SYMBIOTIC ASSOCIATIONS” workshop. The purpose of the course is to explore the relationship between digital and biological manufacturing, as multi-scalar construction techniques. The Workshop will be based on defining a theoretical and experimental framework focused on the convergence between Digital Tectonics and Organic processes. We will focus on the association between biology and architecture in order to manufacture biological mechanisms.
Participants will focus on algorithms based on recursive systems associated with organic and digital manufacturing. The Workshop will be divided into two main phases:
- Computational Phase: The students will explore digital iterative actions simulating biological growth.
- Manufacturing Phase: During this phase we will develop biological reactions, mixing Mycelium with other materials used in rapid prototyping, such as wooden PLA, Clay and biodegradable materials.…
t on my desktop using Windows 7 and Office 2007. In this case it works well with the xlsx file. Could you explain more about your testing conditions?
>for cons if you remove one or more lines excel then your program starts Bug.
Do you mean that if one row or column in Excel is removed, the component gives an error? I have tried that on my system and I do not get an error. Can you explain the detailed steps that lead you to the error?
>The memory in this case is not optimized, ie it line by line do something like that.
It is true that memory is not optimized and, to be frank, I don't know how to do that. Could you explain more about your suggestion?
Thank you!…
Added by Xiaoming Yang at 8:10pm on November 23, 2011
st all the data I create. What I can do is split the analysis into chunks (I'm doing an annual environmental analysis, so I could work things out month by month, say, and only keep the results I need). However this throws up problems too. The issue now boils down to this:
If I run the following in Rhino (i.e. not using Grasshopper)...
import clr
clr.AddReference("mtrand")
import numpy
a = numpy.zeros(10000000)
...I have no problem. I don't reach the limit of addressable memory. But if I do the following...
import clr
clr.AddReference("mtrand")
import numpy
for i in range(10):
    a = numpy.zeros(10000000)
...I run out of memory, even though you wouldn't expect more memory usage, as 'a' should be re-written each time. It seems that this isn't the case though, as I hit the memory limit and crash Rhino. It looks as though something's going wrong with the garbage collection?
Since posting, I noticed this document on Enthought's release page: http://www.enthought.com/repo/.iron/NumPySciPyforDotNet.pdf ... which on page 7 mentions a memory error IronPython can hit when arrays are created and discarded quickly. This looks like the problem I'm hitting, though I'm struggling to get around it. Rewriting my code to use the while-loop trick isn't practical, though I'm curious to understand the code that "exists in NumpyDotNet which will trigger a garbage collection run and wait for the finaliser queue to empty." That sounds like what I need, but I don't really know how to access what they're referring to - could you help me out?
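One workaround worth trying, assuming the monthly analysis can write its results into a preallocated buffer, is to sidestep the allocate/discard churn entirely by creating the big array once and resetting it in place each pass:

```python
import numpy

# Sketch of a buffer-reuse workaround (an assumption: the analysis can
# write into a preallocated array): allocate once, reset in place each
# pass, so no allocate/discard churn reaches the garbage collector.
a = numpy.zeros(10000000)
for month in range(10):
    a.fill(0)  # reuse the same buffer instead of calling numpy.zeros again
    # ... run this month's analysis, writing results into `a` ...
```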
Thanks again,
Rob…
ps://www.youtube.com/watch?v=uxYHlZQSADQ) - thats me doing the demo at around the 8-minute-mark :)
That was a truly gargantuan patch in the end, but it's nice because it still runs in real time. In the newer version there is much better rendering and a lot more options.
So if you have any questions or need some starting points you can always send me a message.
As for running in Parallels - unfortunately I don't think that will work too well. I have actually never tried it, but for anything real-time you want all the power available, so you have to run it in native Windows. I recommend Windows 7. I use Boot Camp on different Macs and it works really well that way. Mind you, it doesn't like retina screens too much for the user interface.
Be prepared for a bit of a learning curve, especially in the field of visual output, because unlike in GH, you have to actually build the whole render process. But there are many good examples and resources on the vvvv.org forum.
Maybe you know the book "Generative Design" (http://www.amazon.de/Generative-Design-Visualize-Processing-Bohnack...), which is amazing but designed for Processing. Get it anyway, because there are vvvv versions of most of the things in there!
If you speak German - or probably even if not - there is a great book called "Prototyping Interfaces", which is the only book about vvvv and shows a lot of great examples, which you can download.
In vvvv itself, in the addons (called girlpower), there is a ton of examples, and you can press F1 on any component to open a help patch that shows you what it does and how it works.
Lastly, I would recommend printing out the keyboard-shortcuts cheat sheet and keeping it handy for reference. There are a ton of shortcuts and it will take a while until you know them because they are pretty obscure, but the more you know, the more fun it is to work with.…
Added by Armin Seltz at 3:29am on November 10, 2015
of Space, 1984) and specified in (Turner, A., "Depthmap: A Program to Perform Visibility Graph Analysis", 2007), intuitively describe the difficulty of getting to other spaces from a certain space. In other words, the higher the entropy value, the more difficult it is to reach other spaces from that space, and vice versa. We compute the spatial entropy of a node from its point-depth set:
S(v) = - Σ_{d=1}^{d_max} p_d · log2(p_d),  where p_d = n_d / |V| and n_d is the number of vertices at point depth d    (11)
"The term d_max is the maximum depth from the vertex and p_d is the frequency of point depth *d* from the vertex" (ibid). Technically, we compute it using the function below, which itself uses some outputs and by-products from previous calculations:
Algorithm 4: Entropy Computation
Given the graph (adjacency lists), Depths as List of List of integer, DepthMap as a tree keyed by (node, depth)
Initialize Entropies as List of double with |V| entries
For node as integer in range [0, |V|)
    double S_node = 0
    For depth as integer in range [1, Depths[node].Max()]
        integer How_Many_of_D = DepthMap.Branch[(node, depth)].Count
        double frequency = (double) How_Many_of_D / |V|
        If frequency > 0 Then
            S_node = S_node - frequency * Math.Log(frequency, 2)
        End If
    Next
    Entropies[node] = S_node
Next
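Algorithm 4 can also be written as a short, runnable sketch in plain Python (an assumption, since the original presumably runs as a C#/Grasshopper component); here depths[i] holds the point depth of every vertex as seen from vertex i, with depth 0 for vertex i itself:

```python
import math

# Runnable sketch of Algorithm 4: spatial entropy from point-depth counts.
def spatial_entropies(depths):
    n = len(depths)  # |V|
    entropies = []
    for node_depths in depths:
        s_node = 0.0
        for d in range(1, max(node_depths) + 1):
            frequency = node_depths.count(d) / n  # p_d = n_d / |V|
            if frequency > 0:  # skip empty depths; 0 * log(0) is taken as 0
                s_node -= frequency * math.log2(frequency)
        entropies.append(s_node)
    return entropies

# A triangle graph: every vertex sees the other two at depth 1.
print(spatial_entropies([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))
```

In the triangle example every node has the same depth distribution, so all three entropy values coincide, which matches the intuition that no space is harder to reach from than any other.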
…