.0001, the functionality has been put into dedicated components (see this post for further details).
Different branches are always combined using Longest List logic. I'm unhappy about this as well; I need to give more control over how different branches are combined, but I haven't yet figured out how to expose such functionality without it being utterly incomprehensible to 99% of users.
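(Roughly speaking, with Longest List the shorter branch simply repeats its last item, so {1, 2, 3} against {10, 20, 30, 40, 50} pairs up as (1,10), (2,20), (3,30), (3,40), (3,50).)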
If you want to ignore the data inside the fourth branch, you'll need to remove that branch before the data goes into the Line component. It's easy to remove a specific branch, somewhat trickier to make this removal dependent on variables elsewhere in the network.
You can use the Split Tree component to achieve this either way. Using a fixed mask (like in the image below) may be sufficient.
The !3 means that any branch is allowed except those with a 3 in that location. The [0-2] means that only branches with a number between 0 and 2 (inclusive) in that location are allowed.
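(So, given branches {0;0}, {0;1}, {0;2} and {0;3}, a !3 rule in the second position lets the first three through and blocks {0;3}, while a [0-2] rule in that position likewise lets only {0;0}, {0;1} and {0;2} through.)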
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
d" floor side).
Rails are obviously defined with slope adjustments at start/end: imagine a rail ramp curve made via, say, 20 control points: at the start p0 is not moved, p1 is moved half the step, ..., p19 is moved half the step * 17, and p20 is moved the full distance. Thus we have what is called "slope adjustment" in our trade.
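Expressed as a quick sketch (my interpretation only: the exact half-step pattern at the two ends is an assumption, not a verbatim reproduction of the scheme above):

```csharp
static class SlopeAdjustment
{
    // Per-control-point offsets for a "slope adjustment" ramp: the first point
    // stays put, the last is displaced by the full distance, and the increments
    // are halved at the two ends so the slope eases in and out.
    public static double[] Offsets(int pointCount, double fullDistance)
    {
        var weights = new double[pointCount];   // 0, 0.5, 1.5, 2.5, ..., then a final half-step
        for (int i = 1; i < pointCount; i++)
            weights[i] = weights[i - 1] + ((i == 1 || i == pointCount - 1) ? 0.5 : 1.0);

        var offsets = new double[pointCount];
        for (int i = 0; i < pointCount; i++)
            offsets[i] = fullDistance * weights[i] / weights[pointCount - 1]; // normalise: last point = full distance
        return offsets;
    }
}
```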
A myriad of options controls where the spaghetti starts (curve.PointAt(userControllableT)), what the continuity mode is (sequential or steady [shown]), and what type of profile is used for the sweep.
…
that ... blah, blah) but each of those cores will be running at lower speeds because of the thermal restrictions.
For instance, a dual core may have a base clock speed of 3.5 GHz per core while a quad core processor may only run at 3.0 GHz. Focusing on a single core, the quad core will be about fourteen percent slower than the dual core. Thus, if you have a program/app that is only single threaded (99% of what's available for AEC purposes, to be honest), the dual core processor is actually better. Then again, if you have something that can (?) use (??) all (???) four processors, such as the notorious Nexus rendering engine (Modo/Microstation/AECOSim), then the quad core processor will actually be about seventy percent faster than that dual core processor.
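Back-of-the-envelope, from those clocks: (3.5 - 3.0) / 3.5 ≈ 14%, which is where the single-core figure comes from (flip it around and the dual core is about 17% faster, since 3.5 / 3.0 ≈ 1.17). Fully threaded, (4 x 3.0) / (2 x 3.5) = 12 / 7 ≈ 1.71, hence the "about seventy percent faster".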
But no AEC engineer worth his name cares about rendering stuff, he he.
AMD and Intel have introduced technologies that can dynamically increase the speed of a processor core to help offset these differences between the dual and quad/octa core products. For instance, Intel may have a quad core processor with a base clock speed of 3.0 GHz, but when only a single processor core is in use at full load, that core will be boosted up to 3.4 GHz. This would then make the quad core processor just three percent slower than a dual core processor that runs at 3.5 GHz.
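(Same arithmetic again: 3.4 / 3.5 ≈ 0.97, hence the roughly three percent deficit once the boost kicks in.)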
In general, and theoretically, a multiple core processor is a "better" choice, but that does not necessarily mean that you will get better overall performance.…
switch this talk off-line if you are interested in knowing the real reasons in depth.
What is the pro way? Well ... imagine objects (blobs et al) that are placed in 3d space by some per-object policy whilst their "property" (bend, repulse) is user controlled on a per-object basis. Then imagine the variants that all that spaghetti yields (the rays, that is) stored in parameters in order to do the obvious: take control of all your previous attempts (replace, remove, swap, reset etc etc).
Get a 10-minute thingy (straight out of my head: NO checks OF ANY kind performed [bugs possible], just a grid that shoots rays and a single blob (a sphere) that does the job). Not even a decent random policy is applied in order to get some nice-looking rays (not to mention their directions).
Now ... imagine any collection of breps distorting the ray chaos: i.e. a ray meets a blob > is distorted (or not) > then meets another > ... > blah, blah (plus some policy for killing rays heading to Sahara instead of Vienna - but that's elementary).
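For what it's worth, a minimal sketch of that meet-a-blob > get-distorted > carry-on chain, assuming the "blobs" are just spheres carrying a per-object bend/repulse weight (Blob, TraceRay, the step-march and the kill box are illustrative names and choices, not any Rhino/Grasshopper API):

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

// Illustrative stand-in for a "blob": a sphere with a per-object bend/repulse policy.
record Blob(Vector3 Center, float Radius, float Bend); // Bend > 0 attracts, Bend < 0 repulses

static class RayChaos
{
    // March a ray in small steps; whenever it passes near a blob its direction is
    // nudged toward (or away from) that blob. Rays that wander outside the working
    // box are killed ("heading to Sahara instead of Vienna").
    public static List<Vector3> TraceRay(
        Vector3 origin, Vector3 direction, IReadOnlyList<Blob> blobs,
        float step = 0.1f, int maxSteps = 2000, float boxSize = 100f)
    {
        var path = new List<Vector3> { origin };
        var p = origin;
        var d = Vector3.Normalize(direction);

        for (int i = 0; i < maxSteps; i++)
        {
            foreach (var b in blobs)
            {
                var toBlob = b.Center - p;
                float dist = toBlob.Length();
                if (dist < b.Radius * 2f && dist > 1e-6f)
                {
                    // Distort: blend the direction toward/away from the blob,
                    // weighted by proximity and by the blob's bend/repulse value.
                    float w = b.Bend * (1f - dist / (b.Radius * 2f));
                    d = Vector3.Normalize(d + w * step * (toBlob / dist));
                }
            }

            p += d * step;
            path.Add(p);

            // Kill policy: stop rays that leave the working volume.
            if (Math.Abs(p.X) > boxSize || Math.Abs(p.Y) > boxSize || Math.Abs(p.Z) > boxSize)
                break;
        }
        return path;
    }
}
```

The per-object "policy" then boils down to which Bend value (and which kill rule) each blob carries; swapping the sphere test for proper brep intersections is where the real couple of hours go.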
This requires at least 2 hours of coding to do it properly (+ the variants "management" C#).
But ... well ... it could be a good real-life case when Solaronix "sponge"-type U/V collectors become available (rather soon) > I'll do it > the future > the glory > the cash > the polar bears.…
gain profits falls in the latter category.
The challenge here is to do the job (up to a point) using "anodyne" ways at the cost of providing slow, incomplete and quite inefficient solutions.
That said, this specific vault case requires addressing 4 "classes" of problems (for instance: regions due to ccx events, or alternatively circuits in graphs, etc etc).
Back to business:
Creating a realistic "random" W truss of that type is one of the most challenging tasks in parametric adventures (in fact ... it is the top dog by some miles). One of the many issues is an approach to manage "on-the-fly" clash situations by individually modifying nodes (without being sure that you can arrive at an overall valid solution). Since one "path" tried may yield dead-end(s), this means keeping track of your corrective actions in a hierarchical manner and being able to follow a different "path". Another (obvious) issue is to use instance definitions for all the "components", thus achieving almost real-time response (try to manage 100K++ "solids" [sleeves, cones etc etc] to see what I mean) ... etc etc.
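Purely as an illustration of that hierarchical-tracking idea (a toy, nothing to do with real truss geometry or any real clash check): the "follow a different path" part is plain depth-first backtracking, roughly like this:

```csharp
using System;

// Toy stand-in: "nodes" are just numbers, a "clash" is two neighbours ending up at
// the same value, and the candidate fixes are a handful of offsets per node. The
// point is only the control flow: try a corrective action, recurse, and when a
// dead end is hit, undo it and follow a different path.
static class TrussBacktracker
{
    static readonly int[] Fixes = { 0, 1, -1, 2, -2 };

    public static bool Resolve(int[] nodes, int i)
    {
        if (i == nodes.Length) return true;                 // every node settled, no clash left
        int original = nodes[i];
        foreach (int fix in Fixes)
        {
            nodes[i] = original + fix;                      // corrective action for this node
            bool clash = i > 0 && nodes[i] == nodes[i - 1]; // "on-the-fly" clash check
            if (!clash && Resolve(nodes, i + 1))
                return true;                                // this branch reaches a valid overall solution
        }
        nodes[i] = original;                                // dead end: revert, caller tries elsewhere
        return false;
    }

    static void Main()
    {
        var nodes = new[] { 3, 3, 3, 5, 5 };
        Console.WriteLine(Resolve(nodes, 0)
            ? "Solved: " + string.Join(", ", nodes)
            : "No clash-free arrangement found");
    }
}
```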
The big thing is: what are you going to tell your instructors about the required code part? (that 99% mentioned) And if a "complete" solution is primarily based on "black boxes", could - in the instructor's eyes - your Master's Thesis qualify as yours?
That said, Vaults_V1 is achievable solely via components.…
ce attractors
3- Relation between mathematics and Form
4- Network surface and Paneling
5- Fabrication methods (slice3d, nesting, ...)
6- Structure and Architecture (Millipede)
7- Energy and form
8- Islamic patterns
9- Physics with kangaroo
…
try now to integrate Geco in an interdisciplinary architectural engineering studio: hoping we can show you some nice applications of your tool, I'll keep you updated and am sending the details by e-mail now. Here is the file (very welcome to be shared). It most probably contains trivial errors by me, thanks for helping and giving some tips! Gr. Michela
FILE:
Ok, right, I see the outputs update correctly. The origin of the problems must be some different mistake I am making:
- Incident radiation: I am not sure I understand what is going on: why do I get so many 'not a number' values? (The Galapagos report is full of NaNs.)
Bio-Diversity: 0.887 Genome[0], Fitness=NaN, Genes [89% · 44%] { Record: Too many fitness values supplied } ...
Genome[7], Fitness=NaN, Genes [74%] { Record: No fitness value was supplied } ....
Genome[9], Fitness=NaN, Genes [37% · 11%] { Record: Genome was mutated to avoid collision Record: Too many fitness values supplied }
- Daylight calculations: the geometry accumulates without deleting the previous models. As a consequence, results almost do not change after a few variations (so, outputs get updated but do not vary). In the current daylight definition: the first object being imported is the one where the grid has to fit; its setting makes it cancel all the other objects during import. All the others do not delete anything when imported. When running loops (manual or GA) that vary parameters, the entire geometry does not get cancelled - so I guess the loop does not pass back through the cancelling step, but imports only the geometry which has been varied by the parameters, using the settings of that import component only? I will then try again by changing the order of the operations, but if you have specific tips, let me know.
THANKS!
…
Simpsons episode where Bart goes into a mall and, in the time it takes him to go in and out of a shop, all the others have been turned into Starbucks.
I personally don't like it, but you can't say they are crushing all competitors because, as far as I know, all owners of those software packages voluntarily sold their property for a good price. I would actually be more worried if an antitrust lawsuit were filed against Autodesk.
For example, this is what happened with Rockefeller's Standard Oil:
The antitrust case against Standard Oil also seems absurd because its share of the petroleum products market had actually dropped significantly over the years. From a high of 88 percent in 1890, Standard Oil's market share had fallen to 64 percent by 1911, the year in which the US Supreme Court reaffirmed the lower court finding that Standard Oil was guilty of monopolizing the petroleum products industry.[32]
The court argued, in essence, that Standard Oil was a "large" company with many divisions, and if those divisions were in reality separate companies, there would be more competition. The court made no mention at all of the industry's economic performance; of supposed predatory pricing; of whether industry output had been restrained, as monopoly theory holds; or of any other economic factors relevant to determining harm to consumers. The mere fact that Standard Oil had organized some thirty separate divisions under one consolidated management structure (a trust) was sufficient reason to label it a monopoly and force the company to break up into a number of smaller units.
To economists, "predatory pricing" is theoretical nonsense and has no empirical validity, either.
In other words, the organizational structure that was responsible for the company's great efficiencies and decades-long price cutting and product improving was seriously damaged. Standard Oil became much less efficient as a result, to the benefit of its less efficient rivals and to the detriment of consumers.
From: http://mises.org/daily/2317
(Beware, that site is very ideologically charged)…
points within the bounds of the site boundary and use each location as an attractor point controlling a variable at each point in the grid (radius of a circle/height of a cube/colour based on a gradient etc.). This would be based on proximity to the attractor points, with the effect of each attractor point essentially scaled by the percentage associated with it. For example, a location with 88% visitor rates would have a more dramatic effect than a location with 26% visitor rates.
I've had a bit of a play around but can't seem to get beyond the point of what is shown in basic point attractor tutorials online. I'm definitely a novice.
Here's how I figured it would be done:
1) Create a grid of source points within a boundary curve.
2) Select 18 pre-defined attractor points.
3) Measure the distance between the source points and the attractor points.
4) Invert this data so that variables increase with proximity rather than decrease.
5) Give each of the attractor points a strength value from 1-100% based on the visitor rates.
6) Use the scaled data to control a variable at each of the source points.
7) Create some way to control the drop-off rate of the effect from each point.
It is at step 4, inverting the data, that I get completely lost.
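For what it's worth, here is a minimal sketch of how steps 3-7 could be wired up in code (plain 2D points; the inverse-distance falloff, the +1 in the denominator and the 0-1 remap are illustrative choices, not the only way to do it):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy version of the attractor workflow: each attractor has a position and a
// strength (visitor rate, 0..1). Each grid point gets a value that grows with
// proximity to the attractors, scaled by their strengths, with a tunable drop-off.
record Attractor(double X, double Y, double Strength);

static class AttractorField
{
    // Returns one value per source point, remapped to 0..1 across the whole grid.
    public static double[] Evaluate(
        IReadOnlyList<(double X, double Y)> sources,
        IReadOnlyList<Attractor> attractors,
        double dropOff = 2.0) // step 7: higher = influence fades faster with distance
    {
        var raw = new double[sources.Count];
        for (int i = 0; i < sources.Count; i++)
        {
            double sum = 0.0;
            foreach (var a in attractors)
            {
                // step 3: distance from this source point to the attractor
                double d = Math.Sqrt(Math.Pow(sources[i].X - a.X, 2) +
                                     Math.Pow(sources[i].Y - a.Y, 2));
                // step 4: invert it so closer means bigger,
                // step 5: scale by the attractor's strength (88% beats 26%)
                sum += a.Strength / Math.Pow(d + 1.0, dropOff); // +1 avoids division by zero
            }
            raw[i] = sum;
        }

        // step 6: remap to 0..1 so the result can drive a radius, a height, a colour, ...
        double min = raw.Min(), max = raw.Max();
        return raw.Select(v => max > min ? (v - min) / (max - min) : 0.0).ToArray();
    }
}
```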
I hope my description is clear. Any help would be greatly appreciated,
Adam
…
ers and researchers, programmers and artists, professionals and academics who come together for 4 days of intense collaboration, development, and design.
The sg2012 Workshop will be organised around Clusters. Clusters are hubs of expertise. They comprise people, knowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 Challenge Material Intensities.
Applicants to the sg2012 Workshop will select their preferred cluster from the following:
Beyond Mechanics
Micro Synergetics
Composite Territories
Ceramics 2.0
Material Conflicts
Transgranular Perspiration
Reactive Acoustic Environments
Form Follows Flow
Bioresponsive Building Envelopes
Gridshell Digital Tectonics
More information about the Workshop and Clusters can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=116&Itemid=131
The application process will close on January 15th, 2012.
Full Fee $1500
Reduced Fee $750
Scholarship Fee $350
Fees include attendance at both the workshop and conference from March 19th-24th.
Reduced Fee and Scholarships are available only for Academics, Students and Young Practitioners, and are awarded during a competitive peer review process.
sg2012 takes place from 19-24 March 2012 at EMPAC (http://empac.rpi.edu/) and is hosted by Rensselaer Polytechnic Institute in Troy, upstate New York USA. The Workshop and Conference will be a gathering of the global community of innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four day Workshop 19-22 March, and a public conference beginning with Talkshop 23 March, followed by a Symposium 24 March. The event follows the format of the highly successful preceding events sg2010 Barcelona and sg2011 Copenhagen.
sg2012 Challenge Material Intensities
Simulation, Energy, Environment
Imagine the design space of architecture was no longer at the scale of rooms, walls and atria, but that of cells, grains and vapour droplets. Rather than the flow of people, services, or construction schedules, the focus becomes the flow of light, vapour, molecular vibrations and growth schedules: design from the inside out.
The sg2012 challenge, Material Intensities, is intended to dissolve our notion of the built environment as inert constructions enclosing physically sealed spaces. Spaces and boundaries are abundant with vibration, fluctuating intensities, shifting gradients and flows. The materials that define them are in a continual state of becoming: a dance of energy and information. Material potential is defined by multiple properties: acoustical, chemical, electrical, environmental, magnetic, manufacturing, mechanical, optical, radiological, sensorial, and thermal. The challenge for sg2012 Material Intensities is to consider material economy when creating environments, micro-climates and contexts congenial for social interaction, activities and organisation. This challenge calls for design innovation and dialogue between disciplines and responsibilities. sg2010 Working Prototypes strove to emancipate digital design from the hard drive by moving from the virtual to the actual in wrestling with the tangible world of physical fabrication. sg2011 Building the Invisible focused on informing digital design with real world data. sg2012 Material Intensities strives to energise our digital prototypes and infuse them with material behaviour. They have the potential to become rich simulations informed by the material dynamics, chemical composition, energy flows, force fields and environmental conditions that feed back into the design process.
More information can be found at http://www.smartgeometry.org
Follow us on Twitter at http://twitter.com/smartgeometry…
Added by Shane Burger at 12:29pm on December 13, 2011