
Wikipedia:Reference desk/Archives/Science/2011 February 18

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 18


NMR is a kind of flouresence?


"Fluorescence is the emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength." vs "Nuclear magnetic resonance (NMR) is an effect whereby magnetic nuclei in a magnetic field absorb and re-emit electromagnetic (EM) energy."

Since radio wave energy is directed towards nuclei in NMR spectroscopy, and radio wave energy is reemitted from the nuclei, one could say that NMR is a kind of flourescence in the radio wave spectrum?

The term Fluorescence (note spelling) is typically reserved for light absorption and emission arising from electronic energy levels (and indeed, the first sentence of Fluorescence#Photochemistry bears this out) rather than nuclear effects, as in NMR.

L-Glucose


Why haven't food companies replaced regular sugar with L-Glucose? It tastes the same as sugar but it can't be metabolized, so it can't cause weight gain (or cavities, for that matter, since bacteria can't metabolize it either). --75.15.161.185 (talk) 00:09, 18 February 2011 (UTC)[reply]

We have an article on L-Glucose, which says, "L-Glucose was once proposed as a low-calorie sweetener, but was never marketed due to excessive manufacturing costs." —Bkell (talk) 00:17, 18 February 2011 (UTC)[reply]
And from that same article, "also found to be a laxative". A quick Google search suggests that it's sweet, but not "same sweetness per amount", so one would have to adjust recipes, and that it does not taste identical, so again it would not be a simple swap-in replacement. While those aren't fatal flaws (all artificial sweeteners have similar concerns of sweetness, cooking qualities, solubilities, specific taste characteristics, etc.), the more problems there are, the less likely it is that someone will try to overcome them as well as try to commercialize a more expensive material. DMacks (talk) 01:05, 18 February 2011 (UTC)[reply]

Particulate Matter and Heat Convection [?]


I have a bit of a 'thought exercise' concerning heat transfer and how it works with such substances as smoke and dust. (It's a problem that is driving me crazy.)

Suppose that there is a hollow cube that is open at one end; to wit, 5 of the 6 sides are made of a material that effectively keeps the cold out (e.g. Gore-Tex or polypropylene). And let us suppose that the 6th (open) end of said box is filled up by a thin "screen" of thick smoke or dust making it impossible to see inside. Let us also suppose—for purposes of this exercise—that no matter how hard the wind blows, the "screen" of smoke or dust will not move or blow away; rather, it will remain a thin barrier between the inside and outside of said box.

Now, the temperature inside the box stands at a comfortable 72 degrees Fahrenheit (22 Celsius), but the temperature outside the box is a chilly 25 degrees Fahrenheit (-4 Celsius). And there is no internal heat source in the box.

Over time, would the temperature inside the box come to match the temperature outside? Or, would the barrier of smoke (or dust) prevent the loss of heat? Also, would the results be any different if the particulates in question were of different substances?

Thank You for reading this! Pine (talk) 00:24, 18 February 2011 (UTC)[reply]

Smoke which is opaque at visible wavelengths isn't necessarily opaque at infrared or other wavelengths, but I guess you mean for us to assume that it is. If so, then that would stop all radiative heat transfer, and, since this magic smoke screen somehow resists any attempt to part it, it would also stop convection. However, there would still be conduction, meaning the smoke particles adjacent to the inside of the box would be heated up, then would transfer this heat to particles farther out, by direct contact, until the outside smoke particles were heated up enough to heat the outside air. Conduction is normally quite slow relative to convection and radiation, but you'd still eventually end up with everything the same temperature. StuRat (talk) 00:45, 18 February 2011 (UTC)[reply]
Define "opaque". If the smoke is simply a black body, then it will absorb infrared radiation from the warm surfaces inside the box, and by direct contact with the air, and it will emit infrared radiation also. The radiation it emits will come out in all directions, thereby removing energy from the box. Similarly, smoke that merely scatters light from inside will not prevent energy from escaping. Now if it is both smoke and mirrors (say, a swarm of nanobot corner reflectors tethered in formation) then you have a mirror around a warm spot - essentially a space blanket, effective but not as good as a thermos, since heat is still readily transmitted by contact. Wnt (talk) 18:20, 18 February 2011 (UTC)[reply]

Aldehydes and ketones


Why are aldehydes classified as a separate type of compound from ketones, even though they're just 1-ketones? --75.15.161.185 (talk) 00:42, 18 February 2011 (UTC)[reply]

That H allows them to undergo reactions that ketones cannot (example: oxidation to carboxylic acid) and they are generally less stable than ketones (so even the reactions they do that are "same as ketone" are often faster). The H also has distinct properties that have no analog in ketones (spectroscopic signals, chemical reactivity, etc.). At some point, it becomes progressively sillier to give overly specific names based on subtle differences ("ethyl ketone" vs "methyl ketone" perhaps?) but presence vs absence of a certain type of bond or atom is usually important enough to mention. DMacks (talk) 00:59, 18 February 2011 (UTC)[reply]

Organism that acts as an air pump


This is a pretty odd question, but I'm hoping someone might have some idea. I'm trying to find some sort of organism (for research), of any size, single or multicellular, that somehow pumps air from one level to another, for example a social algae that removes CO2 from the atmosphere and deposits it underground. It is most important that it transports some sort of gas, though the more/more different types of gas the better. Also if it's a larger organism, structure isn't important, as the research will be built around the organism. Is there anything that fits this bill? Thanks! 64.180.84.184 (talk) 01:28, 18 February 2011 (UTC)[reply]

I don't know if this counts, but bees will fan their wings in order to circulate air in the hive. Ants construct their structures with an eye to local winds in order to ventilate the nest. The Mudskipper will dig a hole and keep an air pocket (actually lots of animals will do that). There are animals that collect air bubbles for their nest, like the Paper Nautilus.[1] And here are some more. Ariel. (talk) 01:55, 18 February 2011 (UTC)[reply]
I wonder how much air you could get a diving bell spider to transport if you offer it unlimited food, but sneakily insert a tube into its underwater air supply. But rather than enriching UF6 I'd prefer using an electric eel... ;) Wnt (talk) 03:21, 18 February 2011 (UTC)[reply]

Hmm... I didn't even consider organisms that literally transported the air by moving! I don't think this will work though unless the process is passive. 64.180.84.184 (talk) 05:40, 18 February 2011 (UTC)[reply]

Blowfish. Cuddlyable3 (talk) 10:19, 18 February 2011 (UTC)[reply]
The gas glands of physoclistous fish pump gases and can achieve remarkably high pressures. Page 45 gives an explanation as to how they work. Also, you can eat the experiment once you're finished.--Aspro (talk) 15:15, 18 February 2011 (UTC)[reply]
Fucus (with pneumatocysts) transport minute amounts of air. I imagine that with just the right environment you could do selection cycles for lumps of fucus that develop buoyancy in the least amount of time - who knows how far it could progress? Wnt (talk) 16:51, 18 February 2011 (UTC)[reply]
Algae often have pyrenoids which actively pump CO2 into them. The article is pretty useless, but this thesis has a lot of details. SmartSE (talk) 17:21, 18 February 2011 (UTC)[reply]
This post on a garden forum describes a slimy algae that rises and sinks in a pond during the day. I would imagine gas is involved in its changing buoyancy, possibly with exchange to the air. EverGreg (talk) 20:21, 18 February 2011 (UTC)[reply]

Permeability


What is the value of magnetic permeability (μ) in the CGS system of units? —Preceding unsigned comment added by Rk krishna (talk) 05:36, 18 February 2011

ummm, 1? 213.49.110.218 (talk) 05:45, 18 February 2011 (UTC)[reply]
(EC) The conversion factor from SI units is 4π × 10⁻⁷ according to this [2]. Mikenorton (talk) 05:50, 18 February 2011 (UTC)[reply]
This is the conversion from CGS EMU to SI, not from SI. SpinningSpark 01:22, 19 February 2011 (UTC)[reply]
For the record, this is why CGS is and remains awesomely awesome and you blokes who use MKS for your resistors and capacitors ought to be put into forced labor camps until such silly conventions are thoroughly beaten out of you. SamuelRiv (talk) 07:11, 18 February 2011 (UTC) [reply]

I changed the section title for easier reference. Cuddlyable3 (talk) 10:14, 18 February 2011 (UTC)[reply]

There are (at least) two versions of the CGS system when it comes to electromagnetic quantities: namely the electrostatic (ESU) and the electromagnetic (EMU) system of units (there are others, but they have the same result as EMU for permeability). In the EMU system, as stated by 213.49.110.218 above, the value of μ0 is 1. In the ESU system, however, it is approximately 1.11 × 10⁻²¹ s²/cm² (that is, 1/c²). See this table. SpinningSpark 15:41, 18 February 2011 (UTC)[reply]
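As a quick numerical restatement of the values discussed in this thread (the EMU figure is dimensionless, and the ESU figure is just 1/c² with c expressed in centimetres per second):

```python
import math

# Vacuum permeability in three unit systems, per the discussion above.
mu0_SI = 4 * math.pi * 1e-7   # SI: henries per metre
mu0_EMU = 1.0                 # CGS-EMU (and Gaussian): dimensionless 1
c_cgs = 2.99792458e10         # speed of light in cm/s
mu0_ESU = 1 / c_cgs**2        # CGS-ESU: 1/c^2, in s^2/cm^2

print(f"SI : {mu0_SI:.6e} H/m")
print(f"EMU: {mu0_EMU}")
print(f"ESU: {mu0_ESU:.3e} s^2/cm^2")  # about 1.11e-21
```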

fridge


How come the engine doesn't burn out on those open fridges in stores that have yogurt etc.? There's no door.

They make the engine larger so it can handle the increased demand. Then it can cycle on and off like a normal compressor. And BTW, I don't think they would burn out even if you did run them continuously. Starting is probably the action that wears a motor the most; just letting it run doesn't harm it (much). Ariel. (talk) 07:18, 18 February 2011 (UTC)[reply]

So can I leave my fridge open for weeks?

Without burning out the motor (which will run continuously), probably yes. But since your household fridge (presumably) wasn't designed to work with the door open, it won't be able to achieve a normal fridge temperature, will consume a lot more electricity for which you will pay, and will likely ice up quickly and have to be defrosted. Stores have to balance increased power consumption (and larger and more capitally expensive motors) against the convenience of their customers not having to look through, and open and close, doors, which would reduce sales and revenue. Also, an open fridge markedly cools the air in its vicinity, which is acceptable for customers briefly visiting an area of a store, but would likely be uncomfortable in your home. 87.81.230.195 (talk) 08:13, 18 February 2011 (UTC)[reply]
... and many supermarket fridges have closed glass doors. The open ones are designed to reduce escape of cold air (unlike household fridges). Dbfirs 08:41, 18 February 2011 (UTC)[reply]
Note that leaving open the door of your kitchen refrigerator for weeks is a way of making the kitchen warmer not cooler, and is likely to spoil your yogurt, etc. (short for et cetera) Cuddlyable3 (talk) 10:11, 18 February 2011 (UTC)[reply]
True: I was unconsciously assuming extraction of the warmed air, which is unlikely to be installed on a household fridge. 87.81.230.195 (talk) 15:15, 18 February 2011 (UTC)[reply]
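The "warmer, not cooler" point is just first-law bookkeeping: whatever heat the open fridge pulls out of the kitchen air comes back out of the condenser coils along with the compressor's work. A minimal sketch with made-up numbers for the power draw and coefficient of performance:

```python
# Energy balance for a fridge running with its door open in a closed kitchen.
W = 200.0           # assumed compressor power draw, watts
COP = 2.5           # assumed coefficient of performance
Q_cold = COP * W    # heat removed from the kitchen air at the front
Q_hot = Q_cold + W  # heat dumped back into the kitchen at the back

print(f"net heat added to the kitchen: {Q_hot - Q_cold:.0f} W")  # always +W
```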
When I worked at a grocery store, back in my teenage years, the coolers with the open front were designed so that cold air blew out a vent in the front and was aimed up and back. This seemed to make a kind of barrier that helped to prevent the cold air already in the cooler from escaping easily. Googlemeister (talk) 14:23, 18 February 2011 (UTC)[reply]
Yup...it's a type of air door design. DMacks (talk) 16:58, 18 February 2011 (UTC)[reply]

biochemistry


The synthesis of ATP by the photosynthetic system is termed as????

Please do your own homework.
Welcome to Wikipedia. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. -- P.S. Did you see our article on photosynthesis? ---- 174.21.250.120 (talk) 16:37, 18 February 2011 (UTC)[reply]

Blocking gravity?


Is it impossible to block gravity, or have we just not discovered how to do it yet? If it's impossible, what makes it impossible? — Preceding unsigned comment added by 83.54.216.128 (talkcontribs)

  1. It is impossible.
  2. That's just the way it is.
Dauto (talk) 14:06, 18 February 2011 (UTC)[reply]
One of the reasons that it's impossible is that there is no such thing as negative (gravitational) mass. Technically speaking, you can't "block" the electrostatic force either. However, what you can do is counter the attractive/repulsive force with an equivalent amount of opposite charges. That's why there's no electrostatic attraction between the earth and the moon, despite the electrostatic force being many orders of magnitude stronger than the gravitational force. Protons equal electrons and the net charges are zero. The materials which "block" electrical fields do so by reacting to the electrical field, setting up their own, opposite direction field, which cancels out the original field. This they do by separating the positive and negative charges. There's no such thing as negative mass, so you can't counter gravity with an equivalent amount of it, nor can you set up an opposing field with positive/negative mass separation. -- 174.21.250.120 (talk) 16:08, 18 February 2011 (UTC)[reply]
The current understanding of gravity doesn't allow for it to be blocked. But, science fiction does because it can consider gravity to be transmitted by particles called gravitons. Because they are particles, you can block them. Then, gravity cannot be transmitted. The problem with this is that every object would emit gravitons. So, if you place a huge gravity shield between you and a planet to block gravity, you'd then be attracted to the huge shield. -- kainaw 14:07, 18 February 2011 (UTC)[reply]
Though one can imagine science fiction scenarios whereby the gravitons are deflected by means of something massless. And if the deflection device was of considerably lesser mass than the thing it was deflecting, that would still be pretty useful. But I'm not saying that any of this is really science. --Mr.98 (talk) 14:55, 18 February 2011 (UTC)[reply]
You might find antigravity and gravitational shielding a good place to start. Among other things, such devices might allow construction of perpetual motion devices—something that the Universe generally frowns upon. TenOfAllTrades(talk) 14:11, 18 February 2011 (UTC)[reply]
One possible form of dark energy (the one which I believe in) is something in galactic voids which exerts an anti-gravity force and thus both pushes galaxies together and also increases the rate of the expansion of the universe. (A different form of this theory has the dark energy constant everywhere, which still accounts for the accelerating expansion of the universe, but wouldn't force galaxies together as much.) If either form of the theory is correct, then it might, in the distant future, be possible to concentrate the source of this dark energy to create a strong anti-gravity field in a certain location, strong enough to cancel the gravity of a planet or star. StuRat (talk) 18:52, 18 February 2011 (UTC)[reply]
Particles aren't in my field (pun), but gluons are self-interacting strictly-attractive particles and thus, if produced freely, can block each other. The point is merely that there are, in accepted theory, ways to affect a field with certain properties similar to gravity, so not knowing really anything about the gravitational charge-carrier graviton is quite a hindrance to a definitive answer to your question. We are safe in saying, however, that nothing within what we currently know would do such a thing.
One important thing with speculative/science fiction on anything like antigravity, however, is to make sure it doesn't violate more fundamental laws of nature, particularly conservation of energy. When you start puncturing a bunch of holes and tubes in spacetime, we can talk about local violations of this law, but otherwise it has to be obeyed by the very definition of the universe itself. SamuelRiv (talk) 21:32, 19 February 2011 (UTC)[reply]
Samuel, I don't think your assertion that free gluons would be strictly attractive is correct. In fact I think (granted, without doing any of the math) that two gluons with identical color pattern would repel each other. Dauto (talk) 00:35, 20 February 2011 (UTC)[reply]
There's math involved? I thought particle physicists usually just bastardize group theory and then ask for billions of dollars because they claim it's "fundamental". ...I'm being a jerk, of course, and I defer to your point Dauto that I have no idea what a free gluon would look or behave like, other than what a prof once told me about making a "gluon lightsabre". SamuelRiv (talk) 09:11, 21 February 2011 (UTC) [reply]

Atom


Are there any circumstances where the distance from nucleus to electron shell within the same atom changes? — Preceding unsigned comment added by 165.212.189.187 (talk)

1st: What has that question got to do with gravity? 2nd: I don't think I fully understand your question. Can you elaborate a little bit more? Dauto (talk) 15:46, 18 February 2011 (UTC)[reply]

1. I don't know yet. OK: is the distance from the inner/outer (take your pick) electron shell to the nucleus constant under all possible conditions in which a particular atom can exist, or can it be different depending on certain circumstances? — Preceding unsigned comment added by 165.212.189.187 (talk)

If you place an atom in an electric field the electron will preferentially be found on one side of the nucleus (polarisation). I'm not sure whether that changes the average distance of the electron from the nucleus. --Wrongfilter (talk) 16:16, 18 February 2011 (UTC)[reply]
Also, if the atom is part of a molecule the electron may spread out into a molecular orbital which is larger than an atomic orbital. Dauto (talk) 16:22, 18 February 2011 (UTC)[reply]
First off, the concept of "distances" in atoms is a little undefined. Because of the quantum nature of subatomic particles, electrons are simultaneously infinitesimally close to the nucleus and infinitely far from the nucleus. The probability varies by radius and is vanishingly small at the extremes, so there's a bounded region where most of the probability occurs, but that region doesn't have a sharp boundary. (Where do you place the inclusion cutoff? 50% probability? 90%? 95%?) That said, the shape and location of the probability distribution are effectively governed by the mass of the electron, the amount of energy it has, the number of electrons it's sharing its space with (cf. Pauli exclusion principle) and the electromagnetic forces acting on it. In the ground state and the absence of an external electromagnetic field, the electromagnetic forces are determined by the number of protons in the nucleus (which element it is), and the number of electrons. This means when you ionize an atom, you change the electron shells - not just the shell which the electron is added/removed from, but all the electron shells. The amount of energy is also important. If you excite an electron (say by it absorbing light), it changes its probability density. This is usually approximated by saying the electron jumps to a different electron shell, but there's a reorganization of all of the shapes and sizes of all of the electron clouds. If you're talking about atoms in molecules instead of isolated atoms, you throw in a host of other complications, as you're not only adding in extra electrons, you have "external" fields from the other atoms in the molecule. (Complicated to the extent that molecular orbitals (electron shells) are more than just a collection of "atom orbitals" and "bond orbitals".) - All that said, within any particular set of conditions, the electron shells should be of constant size and shape. That is, for example, a ground state, neutrally charged neon atom in the absence of any external electromagnetic field will have electron shells exactly the same as any other ground state, neutrally charged neon atom in the absence of any external electromagnetic field. -- 174.21.250.120 (talk) 16:36, 18 February 2011 (UTC)[reply]
How can an electron be "infinitely far" from anything, let alone the nucleus? Matt Deres (talk) 17:37, 18 February 2011 (UTC)[reply]
The "location" is described as a continuous probability function. Any finite limit you place on the distance would be an arbitrary cutoff on that function. The value of the function assymptotically may approach zero, but it doesn't actually permanently reach zero at a certain definite distance out, so again you can't say you will only consider it to be defined up to that certain distance. Now practically speaking, it might be more useful to say "arbitrarily" (since we are using the fiction of electrons having definite position) instead of "infinitely". DMacks (talk) 17:50, 18 February 2011 (UTC)[reply]
I understand that you can't know the position of an electron, but I thought the continuous probability function was for the location "on" the specific shell configuration, or at least that the outermost point of the outer shell was the limit of distance that it could be from the nucleus. Are you saying that the outermost point that an electron can be from the nucleus is infinity? — Preceding unsigned comment added by 165.212.189.187 (talk)
Yes, that is the mathematical result when you compute the probability distribution for an isolated atom, in a Universe which contains just this one atom. This probability distribution will have to be modified at distances where the electron feels the influence of another atom. Even in the idealised case, the probability drops very quickly, so even though it is finite even at a kilometer from the nucleus, it is very small indeed. --Wrongfilter (talk) 21:50, 18 February 2011 (UTC)[reply]
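To put a number on "very small indeed": for the hydrogen 1s ground state, the probability of finding the electron beyond a radius R has the standard closed form e^(−x)(1 + x + x²/2) with x = 2R/a₀. A sketch (working in logarithms, since the kilometre value underflows ordinary floating point):

```python
import math

# Tail probability of the hydrogen 1s orbital beyond radius R (metres):
# P(r > R) = exp(-x) * (1 + x + x**2/2), where x = 2*R/a0.
a0 = 5.29177e-11  # Bohr radius in metres

def log10_tail_probability(R):
    x = 2 * R / a0
    return (-x + math.log(1 + x + x * x / 2)) / math.log(10)

print(log10_tail_probability(1e-9))    # 1 nm: about -13.5, i.e. ~3e-14
print(log10_tail_probability(1000.0))  # 1 km: about -1.6e13 (absurdly small)
```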
In relativistic quantum mechanics I think there is a limit given by the time scale of the atom's interaction with other atoms (times the speed of light). -- BenRG (talk) 23:35, 18 February 2011 (UTC)[reply]
Yes, in a Magnetar the shape of the atom is squeezed into a cylinder. Ariel. (talk) 22:27, 18 February 2011 (UTC)[reply]

You can also "encourage" the electron in a given valence shell to be at a greater distance (higher energy) from the atom without changing the quantum state. This happens in intermolecular interactions all the time - see London dispersion forces. SamuelRiv (talk) 21:38, 19 February 2011 (UTC)[reply]

What force is keeping the electron that is a kilometer away from the nucleus associated with that atom?

Cost of Driving


I'm trying to figure out the marginal cost of driving a car. It seems the total cost is a function of both mileage and time. Some costs are obviously related to the miles driven, such as gas, and others are obviously independent of mileage, such as the insurance costs. The cost that has me stumped is depreciation. The car loses value both as a function of time (the car will lose value over time whether or not you drive it) and mileage (two cars of identical age but with different mileage are worth different amounts). Does anyone know how to quantify this? anonymous6494 14:53, 18 February 2011 (UTC)[reply]

It's also dependent on how popular the particular model is on the second-hand market, which is to some extent influenced arbitrarily by fashion and therefore fluctuates over time - some models become more valuable with age, assuming no unusual deterioration in condition. In the UK, at least, cheap guidebook-type magazines are widely published and frequently updated listing averages of current values, based on recent sales, and there are on-line equivalents such as this.
Alternatively, an accountant valuing the depreciation of a company-owned car might just assign an arbitrary percentage-of-the-original-cost loss per year, such that in x years the value on the company's books will decline to zero, but as that linked article (which actually uses a vehicle as an example) details, there are alternative methods. 87.81.230.195 (talk) 15:12, 18 February 2011 (UTC)[reply]
I had looked at Kelley Blue Book's website to try various combinations of age and mileage but the resolution was not very good (a large change in mileage was needed to produce a change in value). The site you linked was similar but my car isn't listed so it may not be available in the UK (it may be there under a different model name). From an accounting perspective US tax law makes it clear how to depreciate a motor vehicle used for business, but depreciation doesn't accurately depict the actual value, only its value for tax purposes. I was hoping to determine how the actual value is affected by driving additional miles. anonymous6494 15:30, 18 February 2011 (UTC)[reply]
Glass's Guide gives the mileage adjustments for the retail and trade values of each model and age of car in the UK. The figure depends on the model and age of the vehicle. I don't have access to a copy to cite any figures, but last time I checked for my old car it was around 5p per mile. It will probably be double that for current models. Dbfirs 17:41, 18 February 2011 (UTC)[reply]
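One way to quantify the age/mileage split the original question asks about is an ordinary least-squares fit of sale price against both variables across comparable listings; the mileage coefficient is then the marginal depreciation per mile. The listing data below are invented purely for illustration:

```python
import numpy as np

# Fit value ~ b0 + b1*age + b2*miles on hypothetical listings.
age = np.array([1, 2, 3, 4, 5, 3, 2, 6], dtype=float)             # years
miles = np.array([8, 25, 40, 55, 70, 20, 35, 90], dtype=float)    # thousands
value = np.array([18.5, 15.9, 13.8, 11.2, 9.0, 15.0, 15.2, 6.0])  # $ thousands

X = np.column_stack([np.ones_like(age), age, miles])
(b0, b1, b2), *_ = np.linalg.lstsq(X, value, rcond=None)

# $k per thousand miles is numerically the same as $ per mile.
print(f"age effect: ${-b1:.2f}k per year; mileage effect: ${-b2:.2f} per mile")
```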
Note that your example of a cost which is independent of mileage isn't quite right. Insurance companies often ask how many miles you drive per year and figure that into the premium price. Also, if you drive more, an accident is more likely, and if this leads to a claim then your rates will go up again. Finally, if you drive more, tickets are more likely, again increasing your premiums. StuRat (talk) 18:39, 18 February 2011 (UTC)[reply]
Yes, good point. If you get quotes for different anticipated mileages, you can split insurance into fixed and variable elements, though I expect that most insurance companies will use a step function rather than continuously variable premium. Dbfirs 23:45, 18 February 2011 (UTC)[reply]
There are some basic rules in behavioral economics that, all variables regarding car make, model, condition, etc being equal, might give you some guidance on how much a car depreciates in people's heads. For example, let's quantify the idea of a new car losing half its value as soon as it's driven out of the lot. This might seem an irrational cutoff, since the car's been test-driven plenty, often by people who don't know how to use the clutch properly. However, given the choice between a guy offering a discount of $100 on a $10k car (1%) after driving it for a week after buying it and deciding he didn't like it, versus buying a car "new" with no discount, even if the mileage were the same, most people would buy the "new" car. Why? Because it's "new", and 1% isn't a lot to spend (whereas for a "new" $300 TV versus a one-week used $200 TV, most would buy the used one at a 33% discount, even though the proper discount and marginal utility are about the same between a car and a TV). My suggestion is, if doing this informally, come up with a list of survey questions, such as "The car is new on the lot, but the same car with same mileage used for one week is for sale at __% off, which do you buy?", and refine the price as you ask more people - you probably only need to ask 5 or 8 people before getting a pretty good idea of how a car depreciates over each year for 10 years, as long as you don't ask them about multiple years in succession. And of course, your best source on behavior is yourself, as long as you answer yourself honestly. SamuelRiv (talk) 21:49, 19 February 2011 (UTC)[reply]
There are good reasons to avoid slightly used products. If the person who bought it quickly changed his mind, that implies that there was either something wrong with the car to begin with, or it was damaged in some way, either of which could cost far more than $100 to fix. If, on the other hand, he decided to sell it after a few years, that might just be because he wants a new car, and doesn't imply that there's anything wrong with it. StuRat (talk) 00:20, 21 February 2011 (UTC)[reply]

Food spoilage - refreezing meat


One of the mantras of food safety is (so far as I'm aware) to never re-freeze meat products, presumably because the bacterial load increases dramatically during the second thawing cycle. Is there something intrinsic to the freeze-thaw cycle that favours undue bacterial proliferation? If I take any given two pounds of ground beef, freeze and thaw one of them twice over the course of two days, and just leave the other one in the fridge for two days, will there be any appreciable difference in the multiplier of bacteria count? I'm aware of the need for proper cooking, proper handling to avoid cross-contamination, &c, but can't find any biological basis for this particular "rule" of food in our articles. (FD: this pertains to a real-world discussion, but the discussion is with a medical doctor, so at least on this end any suggestion of medadvice would be met with peals of laughter - I'm asking about only the biology here) Franamax (talk) 16:01, 18 February 2011 (UTC)[reply]

I've honestly never heard of this particular "rule." The Food Safety and Inspection Service is the king of meat food safety here in the US and A, and their advice page on freezing doesn't say a thing about it. Even listeria doesn't grow at freezing temperatures since the water activity of ice is... marginal. Foods that are handled a lot (i.e. if you take a little bit off a hunk of ground beef six or seven times) are at greater risk just because every encounter adds a possibility of contamination, but washing your hands and other common sense stuff makes that risk marginal. To be honest, cooking it fully is the big answer to most bacterial issues, though S. aureus toxin is of course going to stick around even if the bug itself is gone. The quality of thawed and re-frozen meat might be... undesirable, of course. SDY (talk) 16:16, 18 February 2011 (UTC)[reply]
The American FSIS say on their website that it's ok.[3] But the Food and Nutrition Service say it's not.[4] The UK's Food Standards Agency says you can refreeze meat once cooked, not raw.[5] In the EU, the statement "Do not refreeze after defrosting" is mandated on all quick-frozen goods under Council Directive 89/108/EEC. 17:04, 18 February 2011 (UTC)
And The Straight Dope has a quick overview. Nanonic (talk) 17:12, 18 February 2011 (UTC)[reply]
All frozen food in the UK carries this instruction not to refreeze. It serves as simplified guidance to hoi polloi who may not understand or follow more complex advice. Considering how dangerous food poisoning is, it seems a sensible approach. Much of the fresh unfrozen meat and seafood that you buy in supermarkets was stored in deep freezers, yet it has been thawed and often labelled as being suitable for freezing on the day of purchase. Chefs etc., on the other hand, are schooled in the health and safety aspects of food storage and should know when it's safe to refreeze. Their fridges and freezers must also have externally visible temp gauges and have fans to speed up cooling.--Aspro (talk) 17:19, 18 February 2011 (UTC)[reply]
It's not just the time of the freezing process, it's the total time of the meat spent at temperatures that are compatible with bacterial growth. If the meat spends four hours at susceptible temperature thawing, then two refreezing, then four hours thawing again, that's sort of like leaving it out for ten hours, and you may end with so much pathogen that even if cooking kills 99.9% of it you still have enough for an infectious dose. The "don't refreeze" isn't ridiculous in the context of cumulative time, but "don't leave it sitting at temperatures that grow bacteria" is really what they're talking about. SDY (talk) 17:55, 18 February 2011 (UTC)[reply]
I think the problem may not be with the bacteria themselves, but our perceptions and memories. Consider three paths frozen meat can take:
A) You buy it frozen, thaw it, leave it in the fridge too long, and you get worried about it going bad soon, so you cook it and eat it. Perhaps you have a light exposure to bacterial toxins.
B) You buy it frozen, thaw it, leave it in the fridge too long, and you get worried about it going bad soon, so you toss it. No exposure to bacterial toxins.
C) You buy it frozen, thaw it, leave it in the fridge too long, and you get worried about it going bad soon, so you refreeze it to stop the bacterial growth. It then stays in the freezer for a year, and you forget that it was questionable when you refroze it. You now take it out, thaw it, and leave it in the fridge several more days, and it goes bad before you cook it and eat it. Heavy exposure to bacterial toxins.
This could be addressed by writing the history of how long the meat has been stored at various temps on a label on the meat, and doing some calculations to see if it's still safe, but most people wouldn't do that (perhaps a butcher dealing with sides of beef might). Another factor is if the meat is defrosted in warm water, which is a bacterium's dream. Doing this twice is far worse than once (if the bacteria grows to 10x the original count once, it would grow to 100x if this is done twice).
Perhaps in the future each slab of meat will come with a device that can tell you whether it's safe or not, by, say, changing the color on an indicator strip that darkens with time and temperature, paralleling the growth of bacteria. This could also be done with a reusable digital thermometer and clock combo. It would ideally be reset and then kept with the meat from the time of the slaughter until cooked and served. I picture them being returned to the butcher/store for refund at the next visit, and then returned to the slaughterhouse, sterilized, reset and reused. Somewhat less effective would be if the consumer took his own device with him to the store, put it with the meat as he put it in the basket, and reset the device then. StuRat (talk) 18:09, 18 February 2011 (UTC)[reply]
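To put numbers on the 10x-versus-100x point above: growth in the danger zone is multiplicative, so each additional warm spell multiplies the count rather than adding to it. A sketch assuming a textbook 20-minute doubling time (an ideal-conditions figure; growth in properly chilled meat is far slower):

```python
# Each warm interval multiplies the bacterial count by 2**(hours * 3).
DOUBLING_MINUTES = 20.0

def multiplier(hours_warm):
    return 2 ** (hours_warm * 60 / DOUBLING_MINUTES)

print(f"one warm thaw (4 h):  x{multiplier(4):,.0f}")  # x4,096
print(f"two warm thaws (8 h): x{multiplier(8):,.0f}")  # the square: x16,777,216
```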
Thanks all for the info and the links. I read them and they seem to repeat the "mantra" without substantiation. However, I may have come up with the answer on my own. When cells are frozen in uncontrolled situations, they have a tendency to lyse, I believe during the freezing process but only made evident upon thawing. This is the difference between selecting only those bacteria competent to penetrate cell membranes and putting a come-one-come-all sign out on the street, isn't it? Is there a reliable source to confirm that? Franamax (talk) 18:35, 18 February 2011 (UTC)[reply]
For food safety, bacterial growth is primarily about four things: nutrients, water, time and temperature (there are other potential complications like pH, but they're not a factor with meat). If the lysed cells provide better nutrition for the bacteria (plausible) or more available water to support growth that might help, but the non-refreezing argument appears to be mostly targeted towards time and temperature. SDY (talk) 18:56, 18 February 2011 (UTC)[reply]
It's going to be difficult to find a reliable source that will tell you it's okay to refreeze meat. As you've no doubt gathered from the responses above, there is at least a non-zero risk every time product is brought to room temperature (i.e. the danger zone). If "common wisdom" says that you should only freeze meat once, any source advocating multiple freezings would leave themselves legally/morally liable for any misadventure that occurred as a result - and for what? I can't think of a particular person or group that would have anything to gain from people refreezing their goods (other than maybe the manufacturers of freezer bags). And when you factor in how poorly most people understand the vectors of food-borne illness plus the ill-conceived methods many people employ to defrost food (e.g. leave it on the table all day for supper that night), you start to think that giving folks the dumbed-down (but safer) advice isn't such a bad thing. Matt Deres (talk) 19:20, 18 February 2011 (UTC)[reply]
I wasn't aware that avoiding refreezing is a safety matter. I thought the point is that when you freeze and thaw meat multiple times, the texture turns more and more mushy. Looie496 (talk) 20:45, 18 February 2011 (UTC)[reply]
Well, it's a safety matter in the sense that bacterial growth is greatest when food is in the danger zone. When the food is sitting in your freezer, bacterial growth is greatly restricted due to the water being unavailable for use. When the food is sitting in your oven, the internal temperature of the food is (hopefully) being raised to the point where the bacteria are killed off. But room temperature is problematic and that's the temperature many people defrost their meat at. For example, when I was a kid, my mom thought nothing of leaving a package of ground beef on the counter for several hours so she could use it for supper. Likewise with cuts of chicken, pork, and so on. Do that once and you might get away with it, but each time you do it, you roll those dice again. For the record, the best way to defrost a chunk of meat is to immerse it in cold water that is kept in motion, for example by putting it in a bowl filled with water and with the tap set to drizzle cold water in, keeping a current moving. Even big chunks of roast defrost quickly - and without the partially cooked bits you get from using a microwave.
No question that the texture gets worse - and not just with meats. Stuff like strawberries will literally turn to mush after just one or two re-freezings. Matt Deres (talk) 21:27, 18 February 2011 (UTC)[reply]
It sounds like your mother may have known a lot about how to avoid food poisoning. Before refrigeration or ice boxes came into being, people knew what was good and bad practice. Even today, in places like Africa, one can see raw red meat covered black with flies and on the point of going putrid, but cooked it's OK. Chicken there, however, is taken home still flapping and vocally protesting its innocence. Also, I have noticed that in recent years red meat in our own supermarkets is not so well bled as it used to be. This shortens the time it can remain safely uncooked. Well-bled beef tastes much better if it is matured for a few weeks before cooking. The Inuit actually eat putrid meat as a delicacy... but I will leave that until another day. --Aspro (talk) 22:07, 18 February 2011 (UTC)[reply]
Something that hasn't been mentioned yet is that the bacterial toxins have more time to build up with repeated thawing and refreezing, and they are not removed by cooking. 92.29.119.194 (talk) 00:19, 19 February 2011 (UTC)[reply]
That might be because some toxins (most) are heat labile.--Aspro (talk) 00:27, 19 February 2011 (UTC)[reply]
I hinted at this above, actually. S. aureus toxin is quite stable at normal cooking temperatures. As toxins go, it's not too bad in that it won't kill you, though I don't think anyone would enjoy recreational use. SDY (talk) 00:31, 19 February 2011 (UTC)[reply]

Do other apes have problems giving birth?


Does any other species have so much trouble? 66.108.223.179 (talk) 16:03, 18 February 2011 (UTC)[reply]

I don't think so. It's how our legs are positioned to allow us to walk upright full-time that causes the problem. Other apes have legs positioned more to the side, providing more room for childbirth (ape-birth?). StuRat (talk) 17:47, 18 February 2011 (UTC)[reply]
Obstetrical Dilemma is relevant but a little vague about when exactly humans' ancestors developed bipedalism. Comet Tuttle (talk) 18:09, 18 February 2011 (UTC)[reply]
My understanding is that the most important factor is the size of our heads at birth. Other apes have much smaller brains relative to body size, and correspondingly smaller heads. Looie496 (talk) 18:59, 18 February 2011 (UTC)[reply]
A detail: Exactly why the human (female) hip area isn't wider. It's to do with walking upright, but the exact reason, as far as I've gathered, is that a wider hip increases the risk of tearing the leg muscle. EverGreg (talk) 20:08, 18 February 2011 (UTC)[reply]
I suspect that it has to do with speed and efficiency when running. Girls can run as fast as boys, until puberty hits and their hips widen. Then the swinging of the hips from side to side seems to slow them down considerably. If you look at female Olympic runners, they tend to have rather narrow hips. StuRat (talk) 21:45, 20 February 2011 (UTC)[reply]
It was confirmed a few days ago that Lucy's species walked upright as humans do 3 million years ago. 66.108.223.179 (talk) 05:28, 19 February 2011 (UTC)[reply]
Then I suspect that they probably did have more trouble giving birth than most other apes, although perhaps not as much as modern humans, if their heads were smaller, at birth, relative to the size of adult females, than ours. StuRat (talk) 21:42, 20 February 2011 (UTC)[reply]

Virtual black box


Aircraft black boxes provide valuable crash info, but can't always be recovered. Has anyone considered the option of a virtual black box? It would work like this:

1) During operation, airplanes would broadcast their current black box info, at a designated frequency, to the nearest tower. This signal would include identification info for the flight and airplane.

2) A computer at the tower would then store this info.

3) We could stop here, and have investigators contact the various towers near the flight path to retrieve the info after a crash. Or, the towers could report the info, in turn, via the Internet, to a central site where the records for each flight are accumulated and available for real-time analysis on all flights. This could be useful, say, to identify a systemic problem like wind shear or multiple hijackings, while the info could still be used to prevent further problems with other flights.

I envision this system being in addition to the current black boxes. So, has anyone proposed this? StuRat (talk) 18:27, 18 February 2011 (UTC)[reply]
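To make the proposal concrete, here is a hypothetical telemetry record; every field name is invented for illustration, and a real system would follow an avionics standard (compare ACARS, mentioned in the replies below):

```python
import json
import time

# Hypothetical per-second report for the "virtual black box" idea above.
record = {
    "flight_id": "XY123",        # identification info for the flight...
    "tail_number": "N12345",     # ...and the airframe
    "timestamp_utc": int(time.time()),
    "position": {"lat": 42.36, "lon": -71.01, "alt_ft": 31000},
    "velocity": {"ground_kts": 450, "heading_deg": 270, "vs_fpm": 0},
    "fdr_snapshot": {"engine1_n1": 92.4, "engine2_n1": 92.1, "flaps": 0},
}

# The receiving tower (step 2) just appends each frame to a per-flight log.
frame = json.dumps(record, separators=(",", ":")).encode()
print(len(frame), "bytes per report")
```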

Googling for "virtual black box" aircraft finds it's idea that's been around for quite some time. DMacks (talk) 18:32, 18 February 2011 (UTC)[reply]
And has it gotten any traction? StuRat (talk) 18:34, 18 February 2011 (UTC)[reply]
(ecx2) Here's a proposal that was published in IEEE Spectrum. You'd still need the black-box flight data recorder to deal with transoceanic flights, over-the-Amazon flights, etc. Comet Tuttle (talk) 18:35, 18 February 2011 (UTC)[reply]
Aircraft Communications Addressing and Reporting System can be used to download some aircraft data to ground stations; after the disappearance of Air France Flight 447 it provided some data to the investigation, as the aircraft and its data recorders have not been found. MilborneOne (talk) 18:43, 18 February 2011 (UTC)[reply]

Commercial airplane satellite navigation


There is the problem that each tower can only track airplanes within a limited range, due to their radar being blocked by the curvature of the Earth. For this reason, I believe the US military has gone with satellite navigation, so the position of each plane can be tracked by satellite and reported back to the various landing fields. Is this correct? Are there plans to do this for commercial flights, as well? StuRat (talk) 18:33, 18 February 2011 (UTC)[reply]

Unless the military changed drastically in the last 20 years, there is a hell of a lot of ground-based radar data being used. My job was radar controller maintenance. The curvature of the Earth is handled by elevation. For example, I had to go to Norway (in February) to work on a radar positioned near the top of a mountain. Sure - a satellite may be able to monitor traffic around northern Norway, but having ground-based radar with a satellite uplink works better. -- kainaw 18:45, 18 February 2011 (UTC)[reply]
I believe the military use of satellites has changed drastically in the last 20 years, and does combine radar tower info with satellite info, currently. My question is if commercial aviation has also started to use satellites in this way. StuRat (talk) 19:08, 18 February 2011 (UTC)[reply]
(ec) The tower, by which I presume you mean the local air traffic control at an airfield, is only interested in aircraft they can see out of the window and within ten or twenty miles of the airfield, so radar being blocked by curvature is not normally a problem. The air traffic control centres need a bigger picture of what is going on, and they are normally fed with radar images from different places, some of which may be hundreds of miles from where the operator is stationed; think of it like an internet of radar images linked to different ATC towers and control centres. As to using satellites - have a read of Automatic dependent surveillance-broadcast: it's not so much the satellites tracking the aircraft; the aircraft just use a system like the GPS/Sat Nav in your car and broadcast the position by radio. MilborneOne (talk) 18:55, 18 February 2011 (UTC)[reply]
But how do the air traffic control centers get their data? It can't be just the sum of all radar returns, since those won't work on flights across oceans. I suspect that they rely on info broadcast from the airplanes, but that could be missing or unreliable, in the case of an electrical failure or intentional incorrect transmissions, say from a terrorist-controlled airplane. StuRat (talk) 19:04, 18 February 2011 (UTC)[reply]
My understanding is that oceanic flights are not tracked by ATC in real time. They are assigned a route when they leave one continent's coverage area, and are supposed to stick to it as best they can, or report deviations by HF radio. (See procedural control.) TCAS still works to prevent mid-air collisions over the oceans; it does not depend on ground support (but uses some of the same onboard equipment as ATC uses for radar tracking). –Henning Makholm (talk) 16:08, 19 February 2011 (UTC)[reply]
Most civilian aircraft in the United States (and much of the rest of the world) are tracked using Airport Surveillance RADAR. The newest model is ASR-11 and most civilian units are commercially sold by Raytheon Surveillance Systems, and then operated by local air traffic controllers and airports under contract to the FAA. The system works pretty well, coverage is pretty dense, and the technology fits the organizational model currently in use to manage air traffic (which is as much a procedural challenge as a technology problem). Converting to a satellite-based system is surely possible - it's just expensive and unnecessary. The FAA's current "next big thing" for tracking civil air traffic is digital ASR: see this FAA press-release website. (After edit-conflict): civil RADARs are far more vulnerable to "spoofing" than military systems; they rely on squawk codes that aircraft voluntarily report for such important data as elevation and aircraft status (plus "unimportant" metadata such as airline and flight number). In any case, this is not considered a serious threat to air defense, or else civil RADARs would be replaced by military-capable RADARs that have electronic countermeasures to make squawk spoofing and airborne electronic RADAR evasion more difficult. Finally, I refer you to Lincoln Laboratory Air Traffic Control Systems, an FFRDC funded by the U.S. Department of Defense and the FAA; this "think-tank" of aviation, electronics, and policy experts considers all of the sorts of issues that you are bringing up, and evaluates strategic technology and policy requirements for the national air traffic control network. In the same way that we have an article on everything, the Feds have an agency for everything. Nimur (talk) 19:16, 18 February 2011 (UTC)[reply]
That was recently on the news. Dauto (talk) 02:37, 19 February 2011 (UTC)[reply]

Mental illness and responsibility of others for one's own thoughts and feelings


Is the exaggerated and persistent belief that others are responsible for your own thoughts and feelings a common component of some mental illnesses? — Preceding unsigned comment added by Wikiweek (talkcontribs)

I will provide almost the exact same answer as I provided to your earlier question. From one single "possible symptom", it is not possible for us, or even a trained professional, to make a judgement call about whether a person has a mental health issue. Correctly interpreting the results of a psychological screening or therapy interview is very difficult. That is why a psychiatrist is a trained medical doctor who has undergone several years of schooling, technical training, apprenticeship, and residency. A psychiatrist is able to meaningfully interpret the entirety of a person's circumstance and situation, not just the response to a single question. "Short questionnaires" should not be considered conclusive in any way; at best, they may help guide a trained professional by providing a wide set of indicators; but they are not a substitute for professional diagnosis. The reference desk will not provide medical advice, including psychiatric advice; as part of this, we will not be able to provide a concrete answer to your question, because it would constitute a diagnosis and we will not perform psychiatric diagnosis here. You can read our articles on mental health, and draw your own conclusions; and if you need assistance with diagnosis, you should seek a trained and licensed professional. Nimur (talk) 20:56, 18 February 2011 (UTC)[reply]
We can't diagnose you here, nor can we offer you any form of treatment. However, that sounds similar to "institutional think"....Then again, mental illness itself may be a delusion per Thomas Szasz's Ideology and Insanity: Essays on the Psychiatric Dehumanization of Man.Smallman12q (talk) 21:41, 18 February 2011 (UTC)[reply]
Actually there is a pretty straightforward answer. The belief that thoughts and sensations are being inserted into your head by external entities is a common feature of paranoid schizophrenia. Note that this doesn't just mean holding others responsible, it means believing that some entity is physically broadcasting thoughts or voices into your head. Merely blaming others for one's problems is not particularly meaningful. (Note: I don't see the question as asking for a diagnosis.) Looie496 (talk) 22:25, 18 February 2011 (UTC)[reply]

Tappan zee bridge cost


In this WSJ article on the Tappan Zee Bridge, it states that the bridge, built in the 1950s, cost $640 million in today's dollars. The article also states "And the state has a team of financiers scrambling to find the $8.3 billion needed to replace it as a car-only structure without adding bus lanes or a train line and more than $16 billion with them."

Why would it cost more than 10x as much to replace the bridge today as it cost to build in the 1950s (or did the WSJ do its math wrong)? Smallman12q (talk) 21:47, 18 February 2011 (UTC)[reply]

The cost of building materials and the cost of laborers have gone up faster than inflation. Converting to "today's dollars" adjusts for general inflation, but inflation tracks the overall level of prices across the economy; the cost to build something in particular does not necessarily match. Another expense is dealing with rerouting all the traffic and people in the area of the construction. Especially in NY this could be a significant expense. Ariel. (talk) 22:33, 18 February 2011 (UTC)[reply]
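Using only the article's two figures, one can back out how large a per-year cost-growth premium over general inflation would account for the gap, ignoring the scope changes discussed below (an illustrative calculation assuming roughly 56 years between the mid-1950s and 2011):

```python
# $640M is the CPI-adjusted historical cost; $8.3B is the replacement estimate.
years = 56
ratio = 8.3e9 / 640e6               # ~13x gap beyond general inflation
premium = ratio ** (1 / years) - 1  # compound annual premium over CPI

print(f"construction costs outpacing CPI by {premium:.1%} per year closes the gap")
```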
It's likely the new bridge would be designed to carry more traffic than the old bridge; a bigger bridge costs more money. And while I'm speculating, it may also be the case that a new bridge would need to be built to stricter standards than the old bridge was. —Bkell (talk) 00:54, 19 February 2011 (UTC)[reply]
Adding weight to that interpretation is a passage from our article: "it was constructed during material shortages during the Korean War and designed to last only 50 years.... The collapse of the I-35W Mississippi River bridge in Minnesota on August 1, 2007 has renewed concerns about the bridge's structural integrity." TenOfAllTrades(talk) 14:29, 19 February 2011 (UTC)[reply]
In addition to the original very lightweight design I noted above, the proposed replacement has a significantly expanded deck. The original bridge has seven lanes, and connects to at least eight lanes of highway at each end. The further lack of shoulders means that access for emergency vehicles can be an issue, and that any traffic problems rapidly snarl up flow along the entire bridge. The proposed design has ten lanes of automobile traffic, plus shoulders, plus dedicated space for pedestrian and bicycle traffic ([6]). TenOfAllTrades(talk) 14:48, 19 February 2011 (UTC)[reply]
I don't know if it could be done here, but one cost saving approach to expand bridge capacity is to leave the original in place (and continue to do maintenance on it), while building another, parallel to it. One bridge is often used for traffic headed in one direction, with the other used for the other direction. Some advantages:
1) Minimal disruption of traffic during construction of the 2nd bridge. The case where the old bridge is kept intact until the new one is available, then demolished, similarly causes minimal traffic disruption. The case where the original must first be demolished so that a replacement can be built in the same location, however, can cause massive traffic disruption for years.
2) Once completed, the ability to route all traffic over a single bridge, during maintenance, or when accidents make one impassable.
3) If this was a toll bridge, it may be possible to charge double tolls in one direction, and thus eliminate the need to construct toll booths on the new bridge or bridge approaches. StuRat (talk) 21:28, 20 February 2011 (UTC)[reply]

Solar mass loss and orbits


How did scientists estimate the Sun to last for about 4.5 billion years? As far as I know, there is a loss of about 1% of solar mass through radiation, which, if calculated linearly, should result in about 79 billion years given the current hydrogen and helium figures. On the other hand, what was the expected Earth's orbit when it began to form?--Email4mobile (talk) 22:10, 18 February 2011 (UTC)[reply]

It's not linear in the slightest (over the entire life). See Stellar evolution for some discussions, but no numbers that I could see. In the current stage (of our sun) it's linear, but it eventually reaches an exhaustion point after which it changes dramatically. Even though it has tons of mass left, it's not able to fuse it like it did before. And finally, even if it were linear, not all the mass gets converted to energy; only the mass deficit between hydrogen and helium gets converted - but the mass of the helium stays (ignoring the fact that it will fuse helium too). Rerun your calculations using the mass deficit instead of the total mass and see what you get. Ariel. (talk) 22:39, 18 February 2011 (UTC)[reply]
A not-too-technical overview is provided here, at "Ask An Astronomer" from Cornell University. The lifetime of the sun is "estimated by assuming that the sun will "die" when it runs out of energy to keep it shining." The amount of available energy is estimated from knowledge of nuclear fusion; and the "energy consumption rate" is estimated from observations of the sun's energy output (how "brightly" it is shining). Nimur (talk) 22:52, 18 February 2011 (UTC)[reply]

Note though that the dating of the origin of the Sun doesn't rely on these things. The most precise figures are based on measurements of decay of radioactive elements in meteorites, which almost uniformly date to 4.5-4.6 billion years old, and which are believed on the basis of theoretical considerations to have formed within the first few million years of the solar system. Looie496 (talk) 23:53, 18 February 2011 (UTC)[reply]

The 79 billion figure likely assumes all the hydrogen will eventually get burned into helium. That assumption is incorrect. Only the hydrogen at the core of the sun will be burned. Dauto (talk) 00:58, 19 February 2011 (UTC)[reply]
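Redoing the questioner's arithmetic with the two corrections above (only the ~0.7% hydrogen-to-helium mass deficit is radiated, and only the core's hydrogen, taken here as a rough 10% of the total, ever burns) recovers the usual ~10-billion-year main-sequence figure:

```python
# Back-of-the-envelope main-sequence lifetime of the Sun.
M_sun = 1.989e30     # kg
L_sun = 3.828e26     # W (current luminosity)
c = 2.998e8          # m/s

efficiency = 0.007   # fraction of fused mass radiated away (H -> He deficit)
core_fraction = 0.1  # assumed fraction of the Sun's hydrogen that ever burns

t_seconds = core_fraction * efficiency * M_sun * c**2 / L_sun
print(f"{t_seconds / 3.156e7 / 1e9:.0f} billion years")  # ~10
```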
And during those 4.6bn years, the Sun will continue burning more-or-less the same as it does now. One quite-interesting problem of the late-19th century: by this time, Darwin was roughly accepted by the scientific community and gradualist geology suggested the Earth was 4bn years old. However, astrophysicists disagreed in theory, and came up with a maximum age of the Earth at 100,000 years. This is by no means Young-Earth Creationism, but the controversy was not without its theological side. The problem was that until the 1920s, everything in the universe was believed to revolve around the brilliant science of statistical mechanics, which can be derived entirely from a priori principles - that is, you don't even need to know what warmth looks like to know what temperature is. So by an ordinary thermodynamic engine with the size and temperature and fuel mass of the Sun, it could not even last a million years before burning out by even 100%-efficient engines. Then we discovered radioactive decay, the quantum mysteries of matter, and the nature of the atomic nucleus, and suddenly we had an entirely new engine in the universe: the nuclear forces. (There's a quote, probably by someone on the Bethe-Critchfield Nobel team for making fusion, when he's out with a companion at night: She said "Look at how pretty the stars shine!" He said "Yes, and right now I am the only man in the world who knows why they shine." According to the story, this was not enough to get her into bed. Amateur.) SamuelRiv (talk) 22:03, 19 February 2011 (UTC)[reply]

So, where can I get a reliable source for these kinds of calculations or estimates? (I can see there is no interest in the Earth's orbit problem ;) ).--Email4mobile (talk) 18:21, 20 February 2011 (UTC)[reply]

Effects to an Earth Human upon entering a parallel space-time continuum via a higher space-time continuum.


If an Earth Human were to step through an inter-dimensional portal where the new space-time continuum was controlled by a time "arrow" (as posited by the late Dr. Hawking in his most famous book) pointing in the reverse direction to the Earth's time arrow, would the Earth Human immediately meld with the new time arrow, atom by atom, so to speak? In that case he would continue to age, so that even when he returned, he would return more aged, but to an earlier Earth time period?

K. McIntire-Tregoning, concert composer, February 18, 2011

(NOTE: copyrights suspended for this transmission with Wikipedia.) 189.173.210.245 (talk) 23:25, 18 February 2011 (UTC)[reply]

Your question has no meaningful physics-based answer, because "inter-dimensional portal" and "new space-time continuum" do not have well-formed, meaningful, physics-based interpretations. Sorry. Nimur (talk) 23:29, 18 February 2011 (UTC)[reply]
Mind me asking who that late Dr Hawking is? Stephen Hawking is still alive. Dauto (talk) 02:29, 19 February 2011 (UTC)[reply]
And Stephen Hawking is normally referred to as Prof. Hawking, not Dr. Hawking. --Tango (talk) 18:21, 19 February 2011 (UTC)[reply]
Anyways, any, any, ANY so-called "time machine", i.e. a device/portal/spacetime construction/anything which takes you backwards in time, will be destroyed by vacuum fluctuations within 10⁻⁴³ seconds (the Planck–Wheeler time). So no, even if the question had some meaning, it wouldn't work. ManishEarthTalkStalk 10:22, 19 February 2011 (UTC)[reply]
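For what it's worth, the Planck (or Planck–Wheeler) time quoted above is √(ħG/c⁵); a quick order-of-magnitude check, which says nothing about the contested physical claim itself:

```python
import math

hbar = 1.0545718e-34  # reduced Planck constant, J*s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"{t_planck:.2e} s")  # about 5.39e-44, i.e. of order 1e-43
```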
Could you provide some reference for that? Problems with time travel are usually described in terms of the Einstein Field Equations in General Relativity (and any solutions thereto requiring regions with negative energy density, which seem to be impossible). I've never heard of quantum mechanics and the Planck time being involved. --Tango (talk) 18:21, 19 February 2011 (UTC)[reply]
It doesn't really make sense to talk about the arrow of time being in a different direction in one universe than in another. How would you compare them? Discussions about the arrow of time usually revolve around making sense of the relationships between different arrows of time within one universe: 1) we remember the past but we don't remember the future (the psychological arrow of time), 2) the universe is denser in the past than the future (the cosmological arrow of time) and 3) entropy (disorder) is lower in the past than in the future (the thermodynamic arrow of time). Some people describe other arrows of time, but they can usually be shown fairly easily to be equivalent to one of those three. What Stephen Hawking worked on (among many other things) was trying to work out whether it is coincidence that, for example, we remember times of lower entropy and not ones of higher entropy, or if the universe has to work that way (in this case, he realised that the act of storing memories in the brain by necessity increases entropy, so the arrows have to point in the relative directions that they do). Trying to compare arrows of time between universes makes no sense. The entire concept of different universes is difficult enough to make sense of. --Tango (talk) 18:21, 19 February 2011 (UTC)[reply]