rom this webpage (the first top download on that page):
https://www1.nyc.gov/site/planning/data-maps/open-data/dwn-pluto-ma...
The issue is that the downloaded 'pluto_20v2.csv' file is enormous: 290 MB.
I deleted about 95% of it and kept only roughly the first 5%.
I hope the definition provides some insight. If not, let us know. I couldn't upload the files for some reason. They are here:
https://www.dropbox.com/s/grxuta8v8x18u70/mapData.gh?dl=0
https://www.dropbox.com/s/5igqsdnryz9df2q/pluto_20v2%20first%205perc.zip?dl=0…
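A quick way to keep roughly the first 5% of a huge CSV (plus the header row) is a short script. A minimal sketch; the file names and the 5% fraction are assumptions for illustration:

```python
import os

def trim_csv(src, dst, fraction=0.05):
    """Copy the header plus roughly the first `fraction` of `src` (by bytes) to `dst`."""
    budget = int(os.path.getsize(src) * fraction)  # byte budget for the output
    written = 0
    with open(src, "r", encoding="utf-8", errors="replace") as fin, \
         open(dst, "w", encoding="utf-8") as fout:
        fout.write(fin.readline())  # always keep the header line
        for line in fin:
            if written >= budget:
                break
            fout.write(line)
            written += len(line)

# Hypothetical usage:
# trim_csv("pluto_20v2.csv", "pluto_20v2_first_5perc.csv")
```

Trimming by byte budget rather than row count avoids scanning the whole 290 MB file just to count rows first.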
Added by djordje to Gismo at 2:21pm on April 15, 2020
is an exhibition building) generic outline (easy with GH), (b) real nested parametric part inclusion in the definition (hmm), (c) a GH ability to bake structured geometry to Rhino ... and then Rhino (acting as a "companion" app to a given AEC app + FE analysis + cost analysis + ...) exporting properly structured data.
2. "Whole" and "Detail" here are tightly related: there's no meaning in promoting an "idea" without solving the nuts and bolts of it. This is the so-called "bottom-up" design mentality.
It's a mystery to me why GH doesn't include, say, some way to control baking on a per-block basis (actually, on a per-nested-block basis).
…
a black hexagonal background. They are containers of parameters but parameters in themselves, like the "x" in a mathematical function. So, what I do is something like:
2) That depends exclusively on the panel, not the cluster, so you can't. It is also not possible to assign access (item, list, tree) to the parameters. What you are trying to do, assigning components to the inputs directly, can only be done from code or using snippets. http://www.food4rhino.com/app/brick-box…
dro). The quality of the driver is also critical: hard to imagine NVidia working overnight to fix "some" driver bugs due to requests from gamers. Game cards are notoriously bad in dual-monitor configurations.
3. A zillion cores (a triumph of marketing vs. common sense) divided by the given clock rate ... gives you just ONE poor old core (Rhino/GH are single-threaded apps) that tries to do the job.
4. A single Xeon E5 2xxx V3 (the higher the clock, the FEWER the cores = better) would be my recommendation. Fast ECC memory is also a must.
PS: Find a friend who operates a "loaded" HP Z840 and test your defs.
…
1) Get plenty of RAM. 32-bit Windows can assign at most 2 GB of RAM per process, so if you have lots of RAM, you can run Rhino+Grasshopper in memory all the way. I'd say get at least 4GB, and preferably 8GB. If you have a 64-bit machine, then it pays off to go even higher than that.
2) Get fast RAM. Memory access is the main bottleneck in many applications, so the faster the RAM the faster most apps will work.
3) Get a fast processor, rather than lots of slow processors. Only a few apps out there can truly use multi-threading (Rhino and Grasshopper cannot). These days, CPU manufacturers try to dress up multi-core CPUs as the next best thing. It is not. It is a lie. Until software can truly run on multiple cores there is no benefit to this. If rendering is a big part of your job, then it does pay off to have a multi-core machine, though.
4) Get a good graphics card. I've always preferred NVidia over ATI, but there are many good ATI cards as well. You can go for a gaming card (they're cheaper), but note that these are optimised for drawing triangles. If you get a professional card, it will draw lines and curves much faster.
--
David Rutten
david@mcneel.com
Robert McNeel & Associates…
ents instead of code ... it could yield a nightmare of components (and a myriad of parameters). For real-life designs I would never attempt to do this without code.
2. A certain experience with Kangaroo (or some other minimal-surface tool, since using K on these may be a case of killing a mosquito with a bazooka). That said, I'm a great admirer of Daniel's work. But on the other hand, why not?
3. A "certain" experience with trusses/space frames.
4. A "certain" experience with instance definitions (that's not doable with GH components).
5. Years of experience with parametric, feature-driven MCAD apps (NX/CATIA) for designing the real-life parts (that have NOTHING to do with "abstract" concepts).
In total I would say that a similar "app" with code (excluding the min surf/mesh thing) would require 6-10 full days of work (or even more).
BTW: https://www.google.com/url?q=http://www.grasshopper3d.com/forum/top...…
r visual programming tools in the games world. MS's Kodu looks interesting. Kismet and Visual3d look even more interesting, mainly because they are more 'interactive' or 'reactive', rather than DAG-based.
Seems like the evolution path for GH-similar apps is:
1. Base 3D or CAD app based on C/C++ code.
2. Add a scripting language interface.
3. Add some kind of visual interface.
4. Add a graph sorting / propagation engine.
5. Re-jig the base 3D or CAD app to make managed/interpreted scripts run faster, multi-threaded.
6. Add a dynamically typed language, DLR stuff.
7. ...
8. Add a constraints solver...?
9. Rebuild the CAD display engine to be procedural at the GPU level?
Seems like there are tools available for converting scripts into some kind of flowchart. There are even visual debuggers; MS even has something called the 'Debugger Canvas'. Spreadsheet constraints.
Seems like the time is ripe for lots of new apps like GH.
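The "graph sorting / propagation engine" step in the list above can be sketched in a few lines: a toy dependency graph (all names hypothetical, nothing like GH's actual internals) that evaluates nodes in topological order, so a change upstream propagates downstream on re-solve.

```python
from graphlib import TopologicalSorter  # Python 3.9+ stdlib topological sort

class Node:
    """A node holds a function and references to its upstream nodes."""
    def __init__(self, func, *upstream):
        self.func = func
        self.upstream = list(upstream)
        self.value = None

class Graph:
    def __init__(self):
        self.nodes = []

    def add(self, func, *upstream):
        node = Node(func, *upstream)
        self.nodes.append(node)
        return node

    def solve(self):
        # Evaluate every node after all of its upstream dependencies.
        deps = {n: set(n.upstream) for n in self.nodes}
        for n in TopologicalSorter(deps).static_order():
            n.value = n.func(*(u.value for u in n.upstream))

g = Graph()
a = g.add(lambda: 3)                 # a "slider" with value 3
b = g.add(lambda: 4)                 # a "slider" with value 4
c = g.add(lambda x, y: x + y, a, b)  # downstream component: c = a + b
g.solve()                            # c.value is now 7
```

A real engine would also track dirty flags so only the affected subgraph recomputes, but the ordering problem is the same.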
…
to control which part is allowed to connect to other parts and how. We will first look at the basics of defining rules, and then how to use the rule generator to create them automatically. Finally we will look at how to use connection types to allow more control on the final aggregation.
Video topics:
- Rules introduction: 00:27
- Rules basics: 4:25
- Using the rules generator: 13:22
- Using rule types: 16:58
Download the tutorial files here: https://bit.ly/wasp101_003
Watch the full playlist: https://www.youtube.com/playlist?list=PLCn3-_9Z4-E5A0EFluiMldlEbDufMiN1g
---
Download Wasp at: https://www.food4rhino.com/app/wasp
Wasp Newsletter: https://mailchi.mp/e0ccee5c4e32/wasp_newsletter
Source Code: https://github.com/ar0551/Wasp…
ess more memory on 64 bits. So you can load larger files and generate more data.
Every time you store something in memory it has to be stored at a specific location. We call this location an address. The first thing you store can be stored at address 0*. If that thing requires a total of 18 bytes, then addresses 0 through 17 are used up. The next thing you store can then be stored at address 18. And so on and so forth. At some point you run out of addresses and when that happens there is no more room to store any new data and there is thus nothing more that your app can do and that's usually when Windows shoots the application in the head and buries the remains behind the chemical sheds.
The total number of unique addresses that can be represented by a 32-bit integer is 4,294,967,296 (4 GigaBytes' worth). However, Windows only allows a 32-bit app to access 2GB, or potentially 3GB if a special switch is set. A 64-bit application is allowed to use 64-bit integers to identify memory addresses, which means it can address up to 18,446,744,073,709,551,616 bytes (or 18.45 ExaBytes). Basically, as long as you have RAM to back you up, a 64-bit application will not run out of memory. Of course it may still become prohibitively slow, as a lot of data requires a lot of computation and 64-bitness does absolutely nothing to make things go faster.
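The address-space arithmetic above can be checked directly. A minimal sketch; the 2 GiB default user address space for 32-bit Windows processes is the figure from the paragraph above:

```python
# Number of distinct addresses a pointer of each width can represent.
addresses_32 = 2 ** 32   # 4,294,967,296  -> 4 GiB of addressable bytes
addresses_64 = 2 ** 64   # 18,446,744,073,709,551,616 -> ~18.45 EB

gib_32 = addresses_32 / 2 ** 30   # 4.0 GiB of theoretical address space

# Windows caps a 32-bit process at 2 GiB of user address space by default
# (3 GiB with the special switch), regardless of how much RAM is installed.
user_space_default = 2 * 2 ** 30
```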
--
David Rutten
david@mcneel.com
Vienna, Austria
* Not true in reality, Windows will already use up some of the available memory just to load the application.…
Added by David Rutten at 1:39pm on November 2, 2012