It doesn't turn out the way I expected.
Instead of balls, I tried feeding planar curves in directly, but I get an error.
When I wrapped it in a function, the values produced by the for loop are not reflected.
Thinking the problem was that I wasn't outputting the function's instance(?), I wrote it as above, but that throws an error too.
Because of all this, I'm no longer sure whether my understanding is correct...
I don't have a deep understanding of Python itself, which makes it all the more confusing.
So: under what conditions, or with what usage, does for b in balls work?
And how can I use it against a different kind of object, as above?
2: About dist = rs.Distance(self.pos,b.pos) in the same function:
does this line know that b.pos is b's position because for b in balls has defined(?) b as an instance of Ball?
Python runs even without declarations, so I can't tell just from reading the code when this works...
This may be a very small point, but to understand Python more solidly, I would be grateful if someone could explain.
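For reference, here is my current understanding as a minimal standalone sketch (Ball, pos and the distance calculation are stand-ins for the real Rhino script, using plain Python instead of rhinoscriptsyntax):

```python
import math

class Ball(object):
    def __init__(self, pos):
        self.pos = pos  # an (x, y, z) tuple standing in for a Rhino point

    def distance_to(self, b):
        # stand-in for rs.Distance(self.pos, b.pos)
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(self.pos, b.pos)))

# "for b in balls" works whenever balls is any iterable of objects;
# nothing declares b's type anywhere.
balls = [Ball((0, 0, 0)), Ball((3, 4, 0))]

# b.pos works simply because every object in the list happens to have
# a .pos attribute ("duck typing") - not because b was declared a Ball.
for b in balls:
    print(b.pos)

print(balls[0].distance_to(balls[1]))  # 5.0
```

If planar curves are passed in instead, the same loop still runs, but b.pos would fail with an AttributeError because curves have no pos attribute - which is presumably the kind of error seen above.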
that both the ASHRAE and European Adaptive models were derived from surveys of awake occupants. While the topic has not been investigated as well as it should be, the few adaptive-style surveys of sleeping occupants that have been conducted show that people tend to desire significantly cooler temperatures when they are sleeping as opposed to when they are awake.
Notably, Chapter 8 of Humphreys's recently published book on Adaptive Comfort (https://books.google.com/books?id=lOZzCgAAQBAJ&printsec=frontcover&dq=Adaptive+Thermal+Comfort+Foundations+and+analysis&hl=en&sa=X&ved=0ahUKEwi6npqSi__KAhUJMj4KHf7SCXMQ6AEIKjAA#v=onepage&q=Adaptive%20Thermal%20Comfort%20Foundations%20and%20analysis&f=false) provides some interesting insights into this. In a 1973 survey, Humphreys found that the quality of sleep started to deteriorate at temperatures above 24-26C regardless of the time of year, and that there was no clearly determinable lower limit to comfortable sleeping temperatures (in other words, people were fine at 12C if they were given enough blankets). He surveyed only British occupants who were sleeping in traditional beds with mattresses and a wide range of blankets. This is important because the nature of the findings is such that the comfort temperatures would be very different if the survey participants had been sleeping in a hammock or in closer contact with the ground (both popular practices for a number of cultures living in warmer climates). Traditional mattresses cut the ability to radiate body heat in half as compared to a standing human body, and I would venture a guess that this is a big reason why much cooler temperatures are desired while sleeping on mattresses as opposed to standing awake/upright.
So for your case, if you want to account for a time of day when occupants are sleeping on mattresses, I would change the comfort temperature for these hours down to 24C. Otherwise, if you are trying to show the comfortable hours of awake people in your space, your current 100% comfortable nighttime hours are a better estimate. I have also noticed that nighttime temperatures become comfortable in extreme weeks of hot/dry climates. This is what is happening in this extreme-week simulation of Los Angeles' San Fernando Valley here:
https://www.youtube.com/watch?v=WJz1Eojph8E&index=3&list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
I will put in the ability to set custom values for comfort temperatures into the Adaptive Comfort Recipe soon so that you can test out a 'sleeping comfort temperature' if you would like. I have created a github issue for it here:
https://github.com/mostaphaRoudsari/Honeybee/issues/486
I was not so convinced by Nicol's argument about humidity on those pages as I was when I saw the correlations of both operative temperature and effective temperature to surveyed comfort votes in real buildings. Humphreys shows these correlations on page 106 of the book I linked to above. Notably, the correlation of Effective Temperature to comfort votes (0.257) is slightly worse than the correlation of just Operative Temperature (0.265). In other words, trying to account for humidity actually weakened the predictive power of the metric. This difference in correlation is not so great as for me to discount an Adaptive comfort model based on Effective Temperature (as de Dear once proposed). However, the correlations of PMV (0.213) and SET (0.185) to comfort votes are so poor that I now use the PMV model only with great caution.
The reasons for the decreased importance of humidity may be multi-faceted, whether it's Nicol's explanation or another. Still, the data suggests that we are probably better off ignoring humidity when forecasting comfort, and should only consider it when evaluating conditions of extreme heat stress, where people's primary means of heat loss is sweating.
-Chris…
ntrol points in Rhino.
Also, I forgot to mention in part 1 that when doing the directional subdivision, depending on how you drew your input mesh, there is a chance that it gets divided in the wrong direction, and you end up with something like this:
Which is not what we want.
The simple way to fix this is with the MeshTurn component, which rotates the direction of each face by one side:
Now we can use physical relaxation to smooth our mesh. In this example I show a simple tensile relaxation, so it will be negatively curved, but the same principles can be applied to all sorts of surfaces by using different combinations of forces.
The definition for the relaxation is attached below.
There are 3 main groups of forces used:
Planarization
For the mesh to be able to unroll properly into flat strips, we want each of the thin rectangles to be flat.
Springs
I already showed how the WarpWeft splitting can be used to assign different strengths to control the shape of a mesh here. Now because of the uneven subdivision we have very different numbers of edges in each direction, so the strengths have to account for this. Depending on the level of subdivision used and the shape you want to achieve, you may need to set the Weft stiffness to be 10 to 100 times that of the Warp.
Edge Smoothing
Because our subdivided mesh has square ends, we might not want to simply anchor the boundary, so I've shown how we can force them to become more circular, while still staying in place. Each boundary curve gets pulled onto its best fit plane, while also applying bending to round it out, and springs to keep it from shrinking.
(This part could also be achieved in other ways, such as pulling the boundary vertices to a curve)
When we run this relaxation, the shape should smooth out to something like this:
Play with the tensions and boundaries until you are happy with the result, wait for it to stop moving, then stop the timer. (Remember it is very important to always stop the timer once the relaxation has finished, before continuing working with the output, as otherwise Grasshopper becomes very slow, because Kangaroo is constantly resolving, even if no movement is visible).
If you want to try other shapes than tensile surfaces, you could also use forces such as bending, laplacian smoothing, or pulling to some target surface to control the form.
Next - Part 3 splitting and unrolling
…
t of data it has to operate on. So only those aspects of the algorithm that differ in these cases are relevant.
For example, if your algorithm always does exactly the same thing (let's say all it does is measure the size of an array and display it on screen), it will be O(1), because it doesn't matter if you run it on an array containing 10 or 1000000 items. Measuring the size of an array is a constant-time operation:
Print(string.Format("Array contains: {0} element(s)", data.Length));
However, if your algorithm works not on arrays but on linked lists, then it becomes an O(N) operation, because counting all the elements in a linked list means you have to iterate over all of them. And the longer the list, the more iterations you need. In fact the number of iterations is exactly the same as the number of items. (ps. if you're using the System.Collections.Generic.LinkedList<T> class then it's still O(1), because apparently that particular implementation of linked lists caches the count and keeps it up to date.)
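To make the linked-list case concrete, here is a hand-rolled sketch (an illustrative Node class, not the .NET LinkedList<T> mentioned above): each node only knows its successor, so the only way to count is to walk every link.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def count(head):
    n = 0
    while head is not None:  # one iteration per node -> O(N)
        n += 1
        head = head.next
    return n

# Build a 3-node list: 1 -> 2 -> 3
head = Node(1, Node(2, Node(3)))
print(count(head))  # 3
```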
If you have a loop that runs for each item, and then inside that loop there is another loop that also runs for each item, then your complexity becomes O(N²). Or, in a similar case if your algorithm consumes two collections (N and M) and iterates over all items in N, and then inside that loop it iterates over all items in M, the complexity is O(N×M).
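A quick illustrative sketch of the nested-loop case: for N items the inner body runs N × N times, which is exactly what O(N²) captures.

```python
def pair_count(items):
    ops = 0
    for a in items:       # runs N times
        for b in items:   # runs N times for every outer pass
            ops += 1
    return ops

print(pair_count(range(10)))   # 100
print(pair_count(range(100)))  # 10000 -> 10x the input, 100x the work
```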
The case can be made that only the most severe complexity is relevant enough to report. For example, if you have an algorithm that comprises three steps, the first of which is O(log(N)), the second O(N²) and the third O(3ⁿ), then technically the total complexity would be O(log(N) + N² + 3ⁿ); however, the first two parts are utterly insignificant compared to the third and can therefore be omitted entirely. Consider for example increasing the input size from 10 to 20 elements:
log(10) + 10² + 3¹⁰ = 1 + 100 + 59049 = 59150
log(20) + 20² + 3²⁰ ≈ 1 + 400 + 3486784401 ≈ 3486784802
As you can see the increase of the complexity is almost entirely due to the O(3ⁿ) portion, so much so that there's almost no point in mentioning the other two.
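The arithmetic above can be reproduced in a few lines (using the base-10 log implied by the example):

```python
import math

def total(n):
    # the O(log N) + O(N^2) + O(3^N) parts of the hypothetical algorithm
    return math.log10(n) + n ** 2 + 3 ** n

print(int(total(10)))       # 59150
print(int(total(20)))       # 3486784802
print(3 ** 20 / total(20))  # ~0.9999999 - the 3^N term is nearly everything
```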
Now, your specific questions:
Constructors/declarations and method invocations are not necessarily O(1). In this particular case they are, but it is possible that some constructor you call has a higher complexity. For example if, instead of an empty List<T>, you construct a SortedList<T> from your inputs, then that may well be the most significant complexity in your entire algorithm and it needs to be taken into account.
Correct. A loop like this has complexity O(N), ignore stuff that only happens once like the declaration of the iteration variable.
I don't understand that line of code. cP is already a list. Why are you calling ToList() on it? As for the cost of copying: duplicating a memory-contiguous collection (like an array or list) is still O(N) in the number of elements, but the constant factor is tiny because whole blocks of memory can be copied in one go with bulk hardware operations. A copy that has to visit each element individually (a deep copy, or copying a non-contiguous structure) is far more expensive per item, even though it is also O(N).
It's very cheap to add items to lists, provided the list has enough space to add new items. By default a list is big enough to contain only 4 items. If you try and add a fifth one, the list will need to allocate more memory elsewhere, copy the 4 existing items into the newly allocated space and only then add a new item. So, if you know ahead of time how many items you'll be adding to a list (or even if you only know a theoretical upper bound), you should construct the list using that known capacity. This will speed up the process of adding many items to a single list.
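The same over-allocation behaviour can be observed in Python too (a CPython-specific sketch - the exact number of reallocations is an implementation detail): the underlying buffer is reallocated far less often than items are appended, which is what makes appending amortised O(1).

```python
import sys

def grows_while_appending(n):
    """Count how often the list's underlying buffer actually grows."""
    lst, grows, last = [], 0, sys.getsizeof([])
    for i in range(n):
        lst.append(i)
        size = sys.getsizeof(lst)
        if size != last:  # a reallocation happened
            grows += 1
            last = size
    return grows

# Far fewer reallocations than appends:
print(grows_while_appending(1000))
```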
Don't know how crypto providers work, but since this part of your algorithm does not depend on cp.Count or the magnitude of populationCount, it doesn't matter for the big-O complexity metric.
…
uts an instance of a class type, which I refer to as Class Instance A for this discussion post.
Component B has a for-loop, through which it does a few things and makes "n" variations (in this case, just 3) of Class Instance A. The simplified/generalized structure of Component B is as follows:
a = []
for i in range(n):
    variation = DoSomething(StartShape)
    a.append(variation)
    for obj in ghenv.Component.OnPingDocument().Objects:
        if obj.Name == "MakeAssembly":
            obj.ExpireSolution(False)
I am not sure if I am using the ExpireSolution method correctly here. I also have found this (Grasshopper.Instances.ActiveCanvas.Document.NewSolution(False)) from some other discussions, but it doesn't seem to solve this problem. Or maybe I have the placement of ExpireSolution or NewSolution wrong. Upon pressing F6 then F5, Component B keeps updating Class Instance A, instead of "resetting" it to its original condition. For-loop in Component B must have the original Class Instance A every time, in order for it to spit out variations. I need to find a way to expire the solution for Component A only, at the end of each For-loop. Eventually, I need it to loop 100+ times, so manually pressing F5 is not really an option... Below is an image of what should happen:
What is the best way to tackle this problem (in either Grasshopper-Python or IronPython), so that the same original Class Instance A is fed in every time the loop is run in Component B?
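For reference, here is the behaviour I am after, sketched in plain Python with stand-in names (StartShape and DoSomething here are placeholders, not my real definitions): if DoSomething mutates the instance it receives, a copy.deepcopy at the top of each pass would keep the original Class Instance A pristine without any re-solution.

```python
import copy

class StartShape(object):    # placeholder for Class Instance A's type
    def __init__(self):
        self.offsets = []

def DoSomething(shape, i):   # placeholder: mutates the shape it receives
    shape.offsets.append(i)
    return shape

original = StartShape()
a = []
for i in range(3):
    working = copy.deepcopy(original)  # fresh, unmodified copy each pass
    a.append(DoSomething(working, i))

print(original.offsets)        # [] - the original is untouched
print([v.offsets for v in a])  # [[0], [1], [2]] - independent variations
```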
Thank you! …
aching my skill set here, but bear with me.
I want to create an animated facade of squares which rotate depending on a sequence of grey-scale images. I've got pretty far thanks to many discussions here, but have hit a blank with exporting my animated model to 3ds max.
Here's my GH script - it's a botch of 3 or 4 various things incorporating centipede at the start and end to get the animation.
All good and it works! It produces animations which I can sequence for presentations too, thanks to its BMP export, which is sort of a side-product.
The problem I have is that the OBJs it produces go wildly wrong when imported into Max. E.g. in Rhino it looks like
But when I've imported them to max it looks like
and as it animates it just gets longer and smaller.
NOW I reckon it might be because my model in Grasshopper is 100 separate geometries and I'd like it to be a single one - but I've not achieved that.
Does anyone have any ideas how to solve this? My end result I would like to look like this rendered still from max, but animated.
Thank you all! This also uses Firefly, so you might need that installed to see how my file works.
…
Added by chris parrott at 10:34am on September 11, 2015
been written about it and I managed to get both of them started with this:
(VB.NET)
Public Class Form1

    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
        Dim doc1 As String = "C:\Users\Xavier\Desktop\Test_GH\Test à changer.gh"
        Dim type As Type = Type.GetTypeFromProgID("Rhino5x64.Application", True)

        'Start Rhino
        Dim rhinocomobj As Object = Activator.CreateInstance(type)
        rhinocomobj.visible = True
        While rhinocomobj.IsInitialized() = 0
            Threading.Thread.Sleep(100)
        End While

        'Start GH and open a file
        rhinocomobj.RunScript("_Grasshopper", 0)
        Dim gh As Object = rhinocomobj.GetPlugInObject("b45a29b1-4343-4035-989e-044e8580d9cf", "00000000-0000-0000-0000-000000000000")
        gh.OpenDocument(doc1)
    End Sub

End Class
From what I understood, this creates a separated COM object, but I don't know how I can manipulate it. For example, how can I get the RhinoDoc, the GH Document and so on?
I tried :
Dim RhinoDocument as RhinoDoc = RhinoDoc.ActiveDoc
but this throws a "System.DllNotFoundException" about RhinoCommon.dll, even though RhinoCommon.dll has been set as a Reference of the Project.
I tried as well to build a new class which sits inside GH (DLL as a reference + GHA inside GH library) as per David's suggestion: http://www.grasshopper3d.com/forum/topics/call-gh-from-c-code, and to access properties but I still get the same "DLL Not Found" exception whenever I use GH commands through this tailor made class.
Lastly, I tried using the RhinoScript interface through commands like gh.AssignDataToParameter, but this doesn't change the value, nor does it throw any exception.
I would like to get full access in order to change parameters from a GH document, output geometries and DWG files, and so on.
I don't know if I am being really clear but any help would be really appreciated.
Thanks!
…
both my plotter/cutter and wide format printer. I had been running the plotter from my main work laptop - a Win10 machine - via the plotter's USB port. As it turns out you can't get Win XP drivers for this USB connection, so I needed another solution.
I tried to use the plotter's DB25 serial port connection using an old DB9-to-DB25 modem cable I had in my collection - no luck, the plotter wouldn't talk. A bit more research and it turns out these plotters need a 'null modem' crossover cable to operate. I found a pic of the correct wiring online and made up my own with some cable and connectors from the local electronics hobby shop.
With this hooked up and using Hyperterminal I was able to fire some codes to the plotter directly and get a response back - winning!
At this point I got my original code working with the 'net use' redirect from LPT1 to COM1.
HOWEVER - now that the plotter was on a COM port, there are a few more interesting things you can do with it - one being the ability to read the paper size/cut area from the plotter.
So what I needed to do was find a way to send and receive data to/from the plotter using the serial port.
A bit of research into .NET's serial port interface, and using a bunch of small pieces of test code, I have managed to completely re-jig this driver.
Upgrades include:
- Direct Serial Port comms using Null Modem cable (a USB to serial adaptor + null modem should also work)
- Plot area read from the plotter - a rectangle the size of the plot area is placed on a separate layer and coloured red
- Testing to see if selected plotting curves are both closed and inside of the cutting area - with errors shown and exiting if they are not right.
- After-plot 'parking' of the plot head at the end of the cut items + an adjustable offset (currently requires manually resetting the origin on the plotter before the next cut)
Great thing is it is now 100% running within Rhino Python - no DOS command-line calls = no flashing up of the CMD window. Also no temp files needed on the HDD and no limit to the number of curves that can be plotted - tested with 200 or so with no issues.
Overall very happy with whole project - have learnt a LOT about Python and .NET interfacing AND ended up with a very handy/useful tool.
Cheers
DK
# This code is a WIP
# It plots directly to a DGI Plotter via the serial port

import System.IO.Ports as Ports
import rhinoscriptsyntax as rs
import time

# Some setup values
com_port = 'COM1'      # change to match plotter port
baud_rate = 9600       # change to match plotter setting
plotter_step = .025    # mm
finish_offset = 10     # mm

# Delete old cutting area and cut objects
if rs.IsLayer('Cutting Area'):
    rs.PurgeLayer('Cutting Area')
if rs.IsLayer('Cut Objects'):
    rs.PurgeLayer('Cut Objects')

# Setup serial port
Myport = Ports.SerialPort(com_port)
Port_Write = Ports.SerialPort.Write
Myport.BaudRate = baud_rate
Myport.ReadTimeout = 5000  # 5 secs
Myport.Close()
Myport.Open()

# Setup plotter
Port_Write(Myport, 'PU;PA0,0;IN;\n')
Port_Write(Myport, 'SP1;\n')
Port_Write(Myport, 'PA;\n')
time.sleep(2)

# Read the paper size from the plotter
Port_Write(Myport, 'OH;')  # HPGL read limits code
time.sleep(2)

return1 = ''
papersize = ''
count = 0
chars_in_buffer = Ports.SerialPort.BytesToRead.GetValue(Myport)

if chars_in_buffer == 0:
    print 'Plotter not ready'
    Myport.Close()
    exit()

while count < chars_in_buffer:
    return1 = Myport.ReadChar()
    papersize = papersize + chr(return1)
    count = count + 1

papersize = papersize.split(",")
rect1 = float(papersize[2]) * plotter_step
rect2 = float(papersize[3]) * plotter_step

print 'Cutting area = ' + str(rect1) + 'x' + str(rect2)

# Place the cutting area curve on its own layer, make it red and lock it
plane = rs.WorldXYPlane()
cutting_area = rs.AddRectangle(plane, rect1, rect2)
rs.AddLayer(name='Cutting Area', color=(255,0,0), visible=True, locked=True, parent=None)
rs.ObjectLayer(cutting_area, 'Cutting Area')

# Get plotting objects
allCurves = rs.GetObjects("Select curves to plot", rs.filter.curve)

# Test to see if these are closed curves - exit if not
for curve in allCurves:
    if not rs.IsCurveClosed(curve):
        print "One or more of these curves are not closed"
        Myport.Close()
        exit()

# Test to see if these are inside the cutting area - exit if not
for curve in allCurves:
    test_inside = rs.PlanarClosedCurveContainment(curve, cutting_area)
    if test_inside == 0 or test_inside == 1:
        print "One or more of these curves are outside of the cut area"
        Myport.Close()
        exit()

# All ok - convert to points and send data to the plotter
rs.AddLayer(name='Cut Objects', color=(0,255,0), visible=False, locked=True, parent=None)

for curve in allCurves:
    Port_Write(Myport, 'PU;PA;SP1;\n')
    polyline = rs.ConvertCurveToPolyline(curve, angle_tolerance=5.0, tolerance=0.025, delete_input=False, min_edge_length=0, max_edge_length=0)
    points = rs.CurveEditPoints(polyline)
    rs.ObjectLayer(polyline, 'Cut Objects')

    # PU to the first point
    x = points[0][0]
    y = points[0][1]
    Port_Write(Myport, 'PU' + str(int(x / plotter_step)) + ',' + str(int(y / plotter_step)) + ';\n')

    # PD to every subsequent point
    i = 1
    while i < len(points):
        x = points[i][0]
        y = points[i][1]
        Port_Write(Myport, 'PD' + str(int(x / plotter_step)) + ',' + str(int(y / plotter_step)) + ';\n')
        i += 1

Port_Write(Myport, 'PU;\n')

# Find the far end of the cut
box = rs.BoundingBox(allCurves)
far_end = str(box[1])
far_end = far_end.split(",")
far_end = float(far_end[0]) / plotter_step
far_end = int(far_end) + finish_offset
far_end = str(far_end)
print far_end

# Return the plotter home and close the port
Port_Write(Myport, 'PU;PA' + far_end + ',0;IN;\n')
Port_Write(Myport, 'SP1;\n')
Port_Write(Myport, 'PA;\n')
Myport.Close()
time.sleep(10)
…
and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four-day Workshop, 15-18 April, and a public conference beginning with Talkshop on 19 April, followed by a Symposium on 20 April. The event follows the format of the highly successful preceding events sg2010 Barcelona, sg2011 Copenhagen, and sg2012 Troy.
The Challenge for sg2013 is entitled Constructing for Uncertainty.
more information
CONSTRUCTING FOR UNCERTAINTY
Design and construction, increasingly information-centric, must also address issues of computational ambiguity. As users, we must drive computational systems to assume new roles and subsume more domains to meet the needs before us. We must consider issues of time and permanence within a cultural and technological landscape of constant change - our most grand gestures will define our environment physically, culturally and economically for generations.
Where historic responses to uncertainty constructed a simplistic environment with basic mechanisms for aggregation and subdivision, we augment these with smart, dynamic and interactive systems. Where modeling capacity has been limited, we now take advantage of vast amounts of data collected by sensing and scanning devices, processed by cluster or grid computing, filtered by machine learning algorithms into patterns, and communicated by ubiquitous devices. Our past data trajectories can guide us in discovering robust and tolerant design systems to meet the demands of a malleable present and uncertain future.
sg2013 Constructing for Uncertainty: transition computational design from the hard space of the ideal to the soft reality of an uncertain built environment.
more information
sg2013 WORKSHOPS
The SG Workshop is a unique creative cauldron attracting attendees from across academia and professional practice, as well as many of the brightest students. The Workshop is open to 100 applicants who come together for four intensive days of design and collaboration.
The annual Workshop is organised around Clusters. Clusters are hubs of expertise comprising people, knowledge, tools, materials and machines. The Clusters provide a focus for Workshop participants working together within a common framework.
more information
sg2013 TALKSHOP
After four intense days of innovative work, Talkshop offers an opportunity for critical reflection on what has been accomplished in the Workshop. Talkshop will be an opportunity to open debates, pose questions, challenge orthodoxies, and propose new ideas.
Talkshop will feature informal and open discussions between Cluster participants, leading practitioners and emerging talents in digital design, offering inside perspectives on how the landscape of computational design is reshaping built form.
sg2013 SYMPOSIUM
The Symposium will examine the year's Challenge. Invited keynote speakers will showcase major projects and research from around the globe that mark out the territory of the year's Challenge. The Symposium is a unique opportunity to hear insights into the challenges ahead for the discipline.
Interwoven throughout the day will be reports and highlights from each Workshop Cluster, giving an opportunity to view work created during the previous four days of intensive collaboration, design and development.
sg2013 SCHEDULE
Call for Clusters: 26 September 2012
Cluster Proposals Due: 4 November 2012
Workshop Applications Open: November 2012
Workshop: 15-18 April 2013
Conference: 19-20 April 2013
More information about the event can be found at smartgeometry.org…
Added by Shane Burger at 10:35am on October 25, 2012