Wikipedia:Reference desk/Archives/Science/2008 October 21

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 21


Molecules that pass through membranes


If a membrane allows molecules to pass through, you say it's permeable. What do you say of a molecule that is able to pass through a membrane? 00:48, 21 October 2008 (UTC)

permeative? —Tamfang (talk) 02:40, 21 October 2008 (UTC)[reply]
Permeative seems to be along the right lines but I would like a word that fits here "...inhibitors of superior affinity, specificity and membrane XXX" ----Seans Potato Business 08:30, 21 October 2008 (UTC)[reply]
small? I think it depends on the membrane. Wanderer57 (talk) 02:49, 21 October 2008 (UTC)[reply]
Small would be ambiguous and not really stress the point that I'm trying to make. Thanks anyway. ----Seans Potato Business 08:30, 21 October 2008 (UTC)[reply]
The stuff that passes through makes up the 'permeate'...

Dioxin in Plastic Container and Foam Container?


I know that dioxin is highly carcinogenic. I heard that when you freeze a plastic water bottle, the dioxin is released into the water and when you heat a foam container, the dioxin is also released. Is this true or false? Sonic99 (talk) 00:59, 21 October 2008 (UTC)[reply]

Urban legend. Plastics for human use are not allowed to contain Polychlorinated dibenzodioxins and the loose term dioxin covers many different chemicals, most of which are not extremely dangerous. Graeme Bartlett (talk) 01:41, 21 October 2008 (UTC)[reply]
I read the Polychlorinated dibenzodioxins article and it says that the dioxins are present in minuscule amounts in plastics. Sonic99 (talk) 03:41, 21 October 2008 (UTC)[reply]
On Snopes, see [1]. Graeme Bartlett (talk) 05:22, 21 October 2008 (UTC)[reply]
How funny: someone just sent me that email yesterday (taken 6 years to make its way to me it seems). I'd already deleted it on the basis it was nonsense, but I've just dug it out again and can confirm it was that one! Gwinva (talk) 07:30, 21 October 2008 (UTC)[reply]

Stay or go?


If you are at Point A and need to get to Point B as soon as possible, and will not learn the location of Point B for another hour - are you better to set off in a random direction or to wait for directions? Does it make any difference if you need to get to Point B for a set time (more than a hour in the future), rather than as soon as possible? —Preceding unsigned comment added by WAYB (talkcontribs) 10:12, 21 October 2008 (UTC)[reply]

It depends on the abstractions used and the topology of your space. Assuming that you can start traveling instantly, and that you are moving on a perfect plane where you can choose any route, you are just as likely to move away from your target as towards it. So you have no gain. But if you know that your point B is 90 minutes travel time away, and you will only get the information 60 minutes prior to your deadline, you can at least improve your chances of making it from 0 to something - essentially, if you stay put, you will never make it, but if you guess right, you can reach point B. If you guess wrong, you are no worse off than before, as you would not have made it anyway. In a less abstract setting, if you start moving now, at least you know that you have the luggage and kids bundled into the car, the engine starts, and you did not overlook that wheel clamp ;-) --Stephan Schulz (talk) 11:29, 21 October 2008 (UTC)[reply]
(edit conflict) The concept of a point being in a "random" position in a plane does not make sense—you have to give a (joint) probability density function. So, for example, the location of Point B might have a uniform distribution in a circle centered at Point A. To put it another way, if you want to assume that B is equally likely to be in any direction and distance from A, you have to give a maximum distance. As it stands, the problem is meaningless. (Also, you might want to consider moving this to the mathematics desk; in fact, you might find the section titled "Points on a plane" interesting.) « Aaron Rotenberg « Talk « 11:42, 21 October 2008 (UTC)[reply]
Let's consider the more tractable 1-dimensional case. Suppose you are at point 0, and you know that in one hour you will be told to go to either point a or point −a, with equal probability. Suppose you set off and travel to a point d where d may be positive or negative, but |d| is less than or equal to a (it is obviously counter-productive to go past a or −a). When your destination is announced, you may have to travel a further distance e = ad, or, equally likely, you may have to travel a further distance e = a+d. The expected value of e is therefore a; in other words, it makes no difference whether you set off in one direction or the other, or how far you travel before your destination is announced.
However, suppose you are told that your destination will be a point chosen at random (with uniform distribution) between −a and a. Once again, you set off and travel to a point d where |d| <= a. On one side of you is a line segment with length a−|d|; the probability that your destination is in this segment is (a−|d|)/2a, and the average further distance to travel is then e = (a−|d|)/2. On the other side of you is a line segment with length a+|d|; the probability that your destination is in this segment is (a+|d|)/2a, and the average further distance to travel is then e = (a+|d|)/2. So the expected value of e is now:

E(e) = (a−|d|)/2a × (a−|d|)/2 + (a+|d|)/2a × (a+|d|)/2 = (a² + d²)/2a
Now you can minimise the expected value of e by choosing to make d = 0 - in other words, don't move until you know your destination. Gandalf61 (talk) 12:19, 21 October 2008 (UTC)[reply]
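Gandalf61's closed form can be checked numerically. A minimal sketch (hypothetical helper names, pure standard library) that approximates the mean remaining distance with a midpoint Riemann sum and compares it against (a² + d²)/2a:

```python
def expected_extra_distance(a, d, n=200_000):
    """Mean of |x - d| for a destination x uniform on [-a, a],
    approximated with a midpoint Riemann sum."""
    step = 2 * a / n
    total = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * step
        total += abs(x - d)
    return total / n

def closed_form(a, d):
    # Gandalf61's result: minimised at d = 0, where it equals a/2
    return (a ** 2 + d ** 2) / (2 * a)
```

At d = 0 both give a/2, and any |d| > 0 only increases the expectation, matching the "stay put" conclusion.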
The above derivation sounds like a version of the Monty Hall paradox, where foreknowledge of an outcome changes the odds of that outcome. If I understand the implications of Gandalf61's solution, then if someone (not you) already knows where your destination is, then it doesn't matter if you leave or stay; the odds are equally good that you will end up closer or farther. However, if the destination is chosen AFTER you have made the decision to stay or go, then it is better to stay. Is that right? --Jayron32.talk.contribs 13:05, 21 October 2008 (UTC)[reply]
That's incorrect, as the "destination chosen after deciding" case still uses no knowledge about what your stay/go decision is. Monty Hall varies from this because that additional knowledge comes into play. — Lomn 13:16, 21 October 2008 (UTC)[reply]

Thinking about this problem in 2-dimensional space, I think it would always be better to stay than to leave early. Think about it as the intersection of two circles. Circle "a" has its center at your destination and its radius equal to the distance between your starting point and your destination. Circle "b" has its center at your starting point and its radius equal to the distance you travel before you learn your destination. Now, look at circle "b". The area of circle "b" that lies outside of the overlap between the two circles is the "bad" area; this area is essentially everywhere you can reach that is farther from your destination. The overlap area is the "good" area, because this area is all closer to your destination. For any distance less than the minimum distance between your starting and end points, the area of the "bad" part of circle b will exceed the area of the "good" part of circle b, thus you have a greater chance of wandering into a point farther from your destination than closer to it... --Jayron32.talk.contribs 13:12, 21 October 2008 (UTC)[reply]
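Jayron32's two-circle picture can be simulated directly: pick a uniformly random heading, walk a distance r, and check whether you end up inside the circle of radius R around the destination. A minimal sketch (hypothetical function name) with the destination placed at (R, 0):

```python
import math
import random

def prob_closer(R, r, trials=200_000, seed=1):
    """Estimate the probability that walking distance r in a uniformly
    random direction from the origin leaves you closer to a destination
    at (R, 0) than staying put."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        cx, cy = r * math.cos(theta), r * math.sin(theta)
        if math.hypot(cx - R, cy) < R:   # landed inside the "good" overlap
            wins += 1
    return wins / trials
```

The exact answer works out to arccos(r/2R)/π for r ≤ 2R, which is always below 1/2 — the same worse-than-50/50 conclusion reached geometrically in this thread.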

If B is in a totally random location relative to A - then if you first walk in some random direction and arrive at C when you finally find out where B is - then the vector C-B is no more or less random than A-B - so there is no point in leaving A - and no benefit either. However (as others have obliquely mentioned) it's unlikely that B is TRULY randomly placed compared to A. This matters. For example...suppose B is one hour away from A and you'll find out where B is in one hour from now. If B turns out to be due north of A then when you walked off to C, if you walked for an hour on a heading anywhere between +60 and -60 degrees of due north then you'll arrive at B sooner than if you'd stayed at A. That's a 120 degree range of directions. But if you walked off in any other direction for an hour - then you'll be further away from B when you finally find out where it is. So in that case (a 1 hour wait for information followed by a 1 hour walk from A to B) - then walking in a random direction while you're waiting has only a 1 in 3 chance (120 degrees out of the full 360) of saving you any time - so you shouldn't do it. Envisage a triangle A-C-B. The length of the A-C edge is the wait time before you get instructions, the A-B line is the time to walk from A to B. In order to 'win' the line C-B has to be shorter than A-B for whatever A-C turns out to be. For all 'winning' triangles C-B has to be shorter than A-B and therefore the angle C-A-B must be less than 90 degrees. So no matter what the 'wait' time and 'walk' times are - the range of headings you can walk in MUST be less than 90 degrees either side of A-B. That means that the total 'arc' of winning headings is always less than 180 degrees - and therefore you ALWAYS have a worse than 50/50 chance of winning - no matter what the 'wait' and 'walk' times are. So you should NEVER leave A...it can't ever give you an improved probability of doing better.
So if "winning" means that your average arrival time at B is shorter - then your best plan is to remain at A. However, there are other versions of "winning". Suppose there is someone dying at B - the search party is out there - and they are going to call you and tell you where B is so you can go rescue them. If you know that they'll die if you don't get there within an hour - and suppose you also know that B is definitely more than an hour from A because you'd already be able to hear him shouting for help if he was less than an hour away. In that case, your probability of winning if you wait at A is zero. So setting off in a random direction gives you at least some probability of success - so reluctantly, you must set off at random. Doing that makes your average arrival time at B much worse - but the probability of getting there within an hour is better than zero - so setting off in a random direction is better than losing for sure.
SteveBaker (talk) 13:37, 21 October 2008 (UTC)[reply]
I'm not sure what "truly random" really means, but if it means "uniformly distributed over the entire plane" then it's not just unlikely, it's impossible. Your density function ends up being identically zero which means the point isn't anywhere. --Tango (talk) 15:06, 21 October 2008 (UTC)[reply]
Now I got confused. I thought I had grasped the concept of Almost surely, but your statement that the point isn't anywhere contradicts what I thought I had understood. Isn't it correct to state that for any given location in the plane, the probability is zero that B will be there, i.e. it almost surely isn't there? I don't see a contradiction with the point being somewhere, even if the probability of it being in any finite part of the plane is zero. I'd appreciate if someone would clear this up for me. --NorwegianBlue talk 19:27, 21 October 2008 (UTC) [reply]
"Almost surely" is used to describe things like the probability of getting exactly 0.5 when you have a uniform distribution over the range 0 to 1. There are infinitely many possible values between 0 and 1, so the chance of getting a specific one of them is basically zero. However, the chance of getting somewhere between 0.4999 and 0.5001 is non-zero and that's what we're normally interested in. If the uniform distribution were to go over the entire real line then even that chance would be zero, as would the chance of it being in any finite range. If you add up all those 0's you still get 0 (0 + 0 + 0 + ⋯ = 0), which contradicts the requirement that the total probability always be 1, which is why such a distribution can't exist. --Tango (talk) 00:23, 22 October 2008 (UTC)[reply]
That's a limitation with the mathematics of probability, and does not preclude the concept of choosing an arbitrary point from the plane (especially for subsets of finite area). Though the probability density function goes to zero, one can still find non-zero expectation values and other probabilistic operations by tiling the space in discrete regions of finite area and using those to perform probabilistic operations in the limit as the size of those regions goes to zero. Dragons flight (talk) 17:38, 21 October 2008 (UTC)[reply]
Dragons flight and NorwegianBlue, it is possible to choose a random point on the plane, but not in a way that's uniformly random, that is, that doesn't favor any point over any other. -- BenRG (talk) 20:27, 21 October 2008 (UTC)[reply]
Can we (once we formulate this problem precisely) profit by considering the limit of more and more diffuse distributions for B, or do the limits (A) always diverge or (B) inextricably depend on our choice of distribution, so that we're not really being impartial? --Tardis (talk) 23:30, 21 October 2008 (UTC)[reply]
If you take this as a purely theoretical math problem on an infinite plane - then point B can be anywhere from 0 to infinity hours walk away in any direction. The average distance from A to B and from C to B is therefore infinite and walking to point C can't possibly make any difference to the average time it takes to get to B (which is infinite, no matter what). Here in the real world, the distance to B is typically going to be constrained in some way - which means that for any real application - the infinite case is of no interest or relevance whatever. Back here in the real world, there is meaning to the question - but the answer turns out to be the same - you can't win by walking in a random direction while you wait...unless "winning" means something a little non-obvious as I discussed earlier. SteveBaker (talk) 00:02, 22 October 2008 (UTC)[reply]
Of course, if we realise that we're actually on a sphere, not a plane, it becomes a little easier. You can have a uniform distribution over a sphere and none of the distances are infinite. Of course, the conclusion will still be that there is no point moving, since a sphere is homogeneous so there is no difference between the point you started at and the point you moved to - they will both be equally good places to get to point B from. --Tango (talk) 00:13, 22 October 2008 (UTC)[reply]

Columbia vs. Challenger disaster


Is it just my impression, or were the causes of the Space Shuttle Challenger disaster terribly similar to those of the Space Shuttle Columbia disaster? Did any commission also come to this conclusion? Mr.K. (talk) 11:23, 21 October 2008 (UTC)[reply]

  • I don't know. Is that your impression? But no, the causes aren't similar, except in the broad general sense that both spacecraft suffered a catastrophic failure. The Challenger was destroyed because an O-ring seal failed in its right solid rocket booster during launch, which led to the liquid hydrogen fuel exploding, whereas the Columbia was destroyed because a piece of foam insulation broke off the main propellant tank and damaged the shuttle's thermal protection system, causing the shuttle to disintegrate during atmospheric re-entry because the heat destroyed the shuttle's right wing (which didn't do any favors to the rest of the shuttle). They aren't similar incidents, and I doubt any commission ever came to a conclusion that they were. -- Captain Disdain (talk) 11:34, 21 October 2008 (UTC)[reply]
  • (ec) Not to my knowledge, and I would be surprised if so. Challenger exploded on the way up because of problems with the booster section connections. Columbia disintegrated during descent due to prior damage caused by insulation foam falling onto the leading edge of the wing. Of course both point to failures in the safety processes within NASA, but that is a trivial commonality that affects all but sheer freak accidents. --Stephan Schulz (talk) 11:38, 21 October 2008 (UTC)[reply]
Yes, I meant the safety processes within NASA: a management team that underestimated the risks and pushed to go on, and an engineering team that reliably calculated the risks and wanted to go the other way. I don't believe that all accidents are caused by these tensions (technical vs. non-technical staff). Mr.K. (talk) 12:11, 21 October 2008 (UTC)[reply]
Well, a lot of comparable disasters can be traced back to a situation where someone had to approve an additional expense and said "no, it's good enough", even though there are engineers who say that it'll be safer if they get to spend the money. That's not necessarily a sign of bad management in itself; there's a difference between ignoring a known fault and ignoring a known risk of a fault, for instance. Still, it's certainly safe to say that NASA's organizational culture contributed to both accidents; the Columbia Accident Investigation Board did state that NASA had failed to learn enough from the Challenger disaster. Perhaps that's what you're looking for? But there's a really major difference between the actual cause of a disaster and the culture that has enabled such a cause to exist. Certainly, it can be said with justification that NASA should have learned more from the Challenger disaster, but even if they had, that wouldn't necessarily have prevented another accident. It's difficult to show cause and effect like that; things like operating budgets and changing administrations and whatnot have a huge impact on how an organization like NASA operates... Which, I should stress, isn't a valid excuse for negligence. -- Captain Disdain (talk) 13:04, 21 October 2008 (UTC)[reply]
You can view these disasters on two levels - there was some kind of an engineering/design error - and the necessary oversight/management to prevent such errors was not present. Viewed as an oversight/management failure - then, yes - these were essentially identical. Viewed at an engineering level, no - they were totally different. In one case, the O-rings of the SRB had insufficient flexibility to seal the solid propellant at low air temperatures found at the launch site - in the other case, chunks of foam insulation falling from the external fuel tank on takeoff hit a part of the leading edge of the wing - allowing hot gasses to jet into the wing structure during reentry. From an engineering perspective - those could not be more different. But from a managerial point of view - in both cases, the problem was known about in advance. Engineers had complained about launching in temperatures below which the O-rings would function adequately - and those warnings had been ignored. Chunks of foam and ice had been observed falling off of the external fuel tank many times in the past and nobody had funded a serious study to investigate the damage they might cause to the orbiter. In both cases, NASA failed to address safety concerns in a timely manner - and THAT was an organizational failing that was largely identical in the two cases.
I have seen this kind of thing happen only once in my career - where I'd spotted a serious problem and management ignored my complaints. This was when I worked in flight simulation. We had a requirement from the customer to provide the pilot with a simulator with a 100 degree field of view in the graphics display. This required us to mount a large monitor in front of the pilot such that it subtended an angle of 100 degrees at his head. Sadly, management decided to save money and ship a smaller monitor - but to continue to display 100 degrees worth of graphics on it - creating a slight "fisheye" distortion on the image (very slight as it happens). I did the math and realized that this distortion would cause the pilot to mis-judge his speed by about 5% if he used visual cues alone. If he learns to fly (and in particular, to land on an aircraft carrier flight deck) with a 5% error in his perception of speed from visual cues - then when he comes to land a real aircraft, he'd come in 5% too fast (assuming he's looking out of the window rather than at his instruments). This would be a potentially life-threatening thing - so I urged management to pay the extra for the larger monitor - for the sake of safety. Needless to say, they ignored my increasingly strident warnings because it saved a couple of thousand dollars on each of a hundred or so simulators.
In that case, I eventually walked into my immediate boss's office with a formal-looking one-page statement of my position (along with all of the math) and I asked him to read it and sign at the bottom to SAY that he'd read it - so that my ass would at least be covered. Then I refused to leave his office until he did so. This got his attention and the problem was addressed rather quickly after that. But in some corporate and government cultures - what I did would have gotten me fired. In places like NASA - where safety is really a huge issue - people who find problems need to be listened to - and rewarded for their discoveries.
SteveBaker (talk) 12:56, 21 October 2008 (UTC)[reply]
There's a third level too: the macro-level design and political level. The shuttle embodies a bunch of design decisions unlike any other spacecraft, and reflects political compromises that conflict with purely technical demands. One could argue (and the return of NASA to very un-shuttle-like designs might indicate they'd agree) that some aspects of the shuttle platform are intrinsically unsafe (or are too expensive to really make safe) and that the real errors were made when the system was specified. These include:
  • building a reusable system when no reusable orbital spacecraft had ever been built; this was motivated by the theory the shuttle would be a low-cost "space truck", but in practice it turned out to be as or more expensive than comparable launchers
  • combining crew launch with heavy lift; Constellation deliberately separates these out
  • reported conflicts with the Air Force over its requirements
  • the solid-booster strapped to giant scary liquid bomb configuration; with a "stack" configuration you have some chance that an explosion might not destroy the payload, with the sandwich configuration that's much less likely
  • building the SRBs at Thiokol's plant in landlocked Utah meant that the SRBs had to be sectional for shipping, and thus had to have o-rings between the sections. Had they been built somewhere on the coast or a major river they could have been shipped to Florida by sea, and thus the designers would have had the option to make the SRBs all one piece.
-- Finlay McWalter | Talk 13:16, 21 October 2008 (UTC)[reply]
Plus the US space program seems to have a great fondness for making new stuff, even though the old stuff worked okay. Bar the very questionable reality of reusability, it's not clear what STS does that they couldn't have reengineered Saturn to do. And I really don't understand what Ares will do that Delta IV Heavy doesn't do or couldn't be improved to do sooner. The Russian programme seems to have been philosophically much more incremental and conservative (which I guess explains both why their rockets look so old-fashioned, and why they're more reliable). -- Finlay McWalter | Talk 13:31, 21 October 2008 (UTC)[reply]
I read some time after the Challenger disaster that it was caused by the fuel in the Solid Rocket Booster becoming brittle. This was a result of the cold temperature before launch. The fuel is made to have a rubbery consistency so that vibration during burning will not cause it to crack. However, the cold temperature made it lose its resilience.
The report I read said that cracks probably developed in the fuel mass because the fuel had become brittle. This allowed burning to take place along the cracks toward the outer wall of the SRB. The burning enlarged and extended the cracks. One crack reached an O-ring and burned its way through it. The pressure within the SRB forced a jet of hot gas to exude from the side of the SRB, resulting in the disaster.
The brittleness may not have extended all the way through the fuel tube. Possibly only the outer portion became cold enough to be brittle.
I have also read that after earlier launches, 17 of the SRB sections recovered from the ocean showed signs of burning where part of an O-ring had burned through. It was not a problem on those occasions because the fuel had burned evenly. By the time burning reached the O-rings, the fuel was practically all expended and pressure within the SRB had dropped to a low value. AndMeToo —Preceding unsigned comment added by 98.17.45.184 (talk) 18:07, 21 October 2008 (UTC)[reply]

The similarity is not that the same component failed in the two aerospace disasters, but that both were "System accidents:" "The unanticipated interaction of multiple failures" in a complex system. Charles Perrow said it was when "Two or more failures, none of them devastating in themselves, come together in unexpected ways and defeat the safety systems." Such failures can be technical or organizational. See also [2] and [3]. Edison (talk) 19:35, 21 October 2008 (UTC)[reply]

Silicon Chips


I'm curious to know - What is the property of Silicon chips which makes them so useful as integrated circuits,microchips,etc?

Thanks! —Preceding unsigned comment added by 89.100.217.103 (talk) 15:23, 21 October 2008 (UTC)[reply]

I believe it's high conductivity and low cost that are most beneficial. It's also incredibly abundant in the Earth's crust, so we're not running out in a hurry. There are probably other reasons. —Cyclonenim (talk · contribs · email) 15:35, 21 October 2008 (UTC)[reply]
"Conductivity" is correct, but "high conductivity" is not. Rather, silicon is a widely-available semiconductor. — Lomn 15:44, 21 October 2008 (UTC)[reply]
Indeed. The important property is that silicon is a semiconductor with a very regular crystal lattice, so its properties can be changed via doping. This makes it possible to construct transistors directly on/in the chip. Being widely available and fairly benign in general properties also helps. --Stephan Schulz (talk) 15:50, 21 October 2008 (UTC)[reply]
Another property of silicon that gave it an advantage in the early days of integrated circuits is that if you put it in an oven with some oxygen, it grows a nice layer of silicon dioxide on the surface that makes a good insulator. Creating an insulator on other semiconductors is more complicated. In recent years the processes have become so complicated that I'm not sure this convenience is a decisive advantage any more. --Gerry Ashton (talk) 03:09, 22 October 2008 (UTC)[reply]

Can the process of transesterification be duplicated?


Can the process of transesterification be duplicated? —Preceding unsigned comment added by 207.166.31.13 (talk) 16:53, 21 October 2008 (UTC)[reply]

I'm not sure what you mean by "duplicated" but we have an article, Transesterification, if you haven't seen it. --Tango (talk) 17:07, 21 October 2008 (UTC)[reply]
Do you mean perhaps reversed? Mr.K. (talk) 12:09, 22 October 2008 (UTC)[reply]

About a graph


Hi! There was a question in my textbook that asked about the energy vs. time graph of a ball that moved on a frictionless floor and kept on colliding inelastically between 2 parallel walls [coefficient of restitution = e (<1)]. My teacher taught that it would be a rectangular hyperbola (I believe that's what it's called, and I don't know how to draw a graph in here), because energy decreases exponentially, i.e., initial E = (1/2)mv²; after 1st collision, E = (1/2)me²v²; after 2nd collision, E = (1/2)me⁴v²; and so on... My doubt is, how can the graph be a continuous hyperbola? When it moves between the walls, as it faces no resistance or friction (it's an ideal theoretical situation, in the question), its energy does not decrease with time, so shouldn't it be a straight line there? And since the time taken for a collision is very small compared to the time taken to traverse the distance between the walls, at that point there should be a sharp drop in the energy?? —Preceding unsigned comment added by 116.68.77.73 (talk) 17:23, 21 October 2008 (UTC)[reply]

You are correct, it will be a straight line while it is between walls. How sharp the drop is will depend on the details, but it will still be smooth and continuous, you won't have vertical lines in the graph. --Tango (talk) 18:01, 21 October 2008 (UTC)[reply]
In the real world - there could never be discontinuous jumps - but that's because in the real world there could never be a perfectly inelastic collision. If you're going to claim some hypothetical (but impossible) perfectly inelastic collision - then impossible perfectly vertical jumps in energy come with the territory. The graph would approximate a rectangular hyperbola if you imagine the gap between the walls to be just a hair's breadth wider than the size of the ball. The little vertical stairsteps would be very little indeed as the ball travelled in ALMOST a straight line with just a slight rattle from side to side. In the real world - with somewhat elastic collisions and at least SOME friction with the floor - the approximation might be rather good. SteveBaker (talk) 23:51, 21 October 2008 (UTC)[reply]
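The staircase shape is easy to compute directly. A minimal sketch (hypothetical function name) that records the piecewise-constant energy history, treating each collision as instantaneous so the energy drops by a factor e² at every wall hit:

```python
def energy_history(m, v, e, L, n_collisions):
    """(time, kinetic energy) pairs for a ball bouncing between walls
    a distance L apart with restitution e. Collisions are instantaneous,
    so energy is constant between them and drops by a factor e**2 at each."""
    t, speed = 0.0, v
    points = [(t, 0.5 * m * speed ** 2)]
    for _ in range(n_collisions):
        t += L / speed      # time to cross the gap at the current speed
        speed *= e          # collision rescales speed by a factor of e
        points.append((t, 0.5 * m * speed ** 2))
    return points
```

Each step is longer than the last (the ball keeps slowing down), so the corners of the staircase trace out the smoothly decaying envelope the teacher described, while the graph itself stays flat between collisions.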

Thank You ! —Preceding unsigned comment added by 116.68.77.73 (talk) 01:54, 22 October 2008 (UTC)[reply]

Trees on a slope


I've noticed something interesting about trees on sloping hillsides: some grow roughly perpendicular to the slope (slanted relative to level ground nearby), but some stand more or less upright (relative to level ground, and pointing to the zenith). There doesn't seem to be a consistent difference between tree species as far as I can tell, but what genetic or environmental factors could be involved here? 137.151.174.128 (talk) 19:39, 21 October 2008 (UTC)[reply]

The principle is called tropism, which is the growth response of plants to their environment. Roughly speaking, factors in an environment such as light, soil composition, moisture, or gravitational changes (such as an unstable, sliding hillside) can all affect the way in which a plant grows. The Tropism article explains the general trends, and will lead you to specific articles about different types of tropism. --Jayron32.talk.contribs 19:52, 21 October 2008 (UTC)[reply]
Tropisms generally cause a tree to try to grow vertically, but plastic flow of the soil down an extreme slope can move the inclination of the base of the tree. The top should still keep angling toward the vertical. The tree which is vertical from roots to top might be on a slope where the soil is stable, for whatever reason, like boulders with the tree growing in a pocket of soil. Edison (talk) 04:14, 23 October 2008 (UTC)[reply]

Mylar


"Mylar" can refer to either metallized nylon or to (usually metallized) biaxially-oriented polyethylene terephthalate film. If I've got a sample of material, how can I tell which it is? --67.185.172.158 (talk) 22:19, 21 October 2008 (UTC)[reply]

Stick it in your cassette recorder. The metallized nylon won't record. --GreenSpigot (talk) 23:02, 21 October 2008 (UTC)[reply]
Run current through it and see which one has the highest resistance. Metallized nylon will conduct, the real Mylar won't.[4] Mac Davis (talk) 03:40, 23 October 2008 (UTC)[reply]

Double Suspension Gallop


What is the difference between a single and a double suspension gallop? —Preceding unsigned comment added by 98.244.174.49 (talk) 23:59, 21 October 2008 (UTC)[reply]

In short, an animal running in a double suspension gallop has two points in each stride where all four legs are off the ground - once during the extended phase and once during the contracted phase. See this site for some illustrations. (It's the first one I came across.) -- Tcncv (talk) 01:06, 22 October 2008 (UTC)[reply]
I thought this might have something to do with Conductor gallop and how many maxima and minima appear between suspension points.