
Wikipedia:Reference desk/Archives/Science/2010 June 28

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 28


Listed ingredients


So I was drinking my Lipton Lemon Ice Tea (LLIT), and thinking, "damn, this tastes like bad water", and I realized (perhaps a bit late) that drinks, obviously containing huge amounts of water, rarely (never?) actually list water as part of their ingredients, at least where I live.

Is there any reason companies would wish to avoid printing water on their ingredient labels (other than the fact that it can always be assumed, and thus is just a waste of printing space)?
Why are the relative amounts of individual ingredients on packaging never (properly) listed? Is this a "printing space" issue as well? i.e. listing them in order of weight eliminates a lot of information that may not be needed. I would have liked to know if LLIT was only 95% water, or like 99.9% water, mind you.
For the consumer, it seems that the more comprehensive the ingredients are, the better. Is there any other reason why companies seem only to publicize the absolute legal minimum when it comes to the ingredients of consumables? Thanks in advance. 210.165.30.169 (talk) 01:03, 28 June 2010 (UTC)[reply]
In the UK, drinks that contain added water do include water in their list of ingredients. --Tango (talk) 01:37, 28 June 2010 (UTC)[reply]
In the US, if you look at, say, a bottle of Diet Coke, the first ingredient is listed as "Carbonated filtered water", so I am not familiar with any type of product that is allowed to omit water from the ingredients list. As for why companies only do the legal minimum, one possible reason is that from the company's point of view, their recipe may be considered a crucial trade secret and they don't want to tell all their competitors how to clone their product. Comet Tuttle (talk) 02:50, 28 June 2010 (UTC)[reply]
Yeah I see labels that say "Flavor" or "Seasoning" instead of listing the ingredients. --Chemicalinterest (talk) 10:50, 28 June 2010 (UTC)[reply]
I am also in the US, and I am under the impression that very few products put water on the label. I certainly only see it rarely. Falconusp t c 04:58, 28 June 2010 (UTC)[reply]
Can you give an example of a product that contains added water (not just water contained in another ingredient) and doesn't include it in the list of ingredients? --Tango (talk) 15:53, 28 June 2010 (UTC)[reply]
Sometimes they put "aqua" instead of the word water, and speaking of "bad", those tea leaves were probably slightly decomposed before the drink was made up. Graeme Bartlett (talk) 05:16, 28 June 2010 (UTC)[reply]
As for why companies are not interested in detailed ingredient lists, it's in part because of trade secrets, as well as the fact that most of them don't get a lot out of telling the consumer that a bunch of scary-sounding (but probably innocuous) chemicals are added to their food in order to give it the flavor, texture, and color they desire. When questions of labeling have come up—say, in regards to BGH—the companies not in favor of labeling usually say, "hey, the FDA says it's safe—otherwise we couldn't sell it—and so why do we need to rile people up about something that won't hurt them?" I find it a pretty uncompelling argument—"people are too dumb to understand ingredients lists, so let's just keep them in the dark on it"—but there you have it. --Mr.98 (talk) 16:19, 28 June 2010 (UTC)[reply]
Well, when you consider how many people get freaked out over dihydrogen monoxide in consumer products, it's pretty clear that many people are too dumb to understand ingredient lists and thus are better off kept in the dark on it. FWiW 67.170.215.166 (talk) 07:47, 30 June 2010 (UTC)[reply]

Sigma additivity in quantum mechanics


Is sigma additivity true for probabilities in quantum mechanics? Sigma additivity is one of the axioms of probability; what I would like to know is whether it is an arbitrarily chosen axiom or whether there is any scientific basis behind it (and the only application of probability in its purest form in science is in quantum mechanics). Also, do the other axioms of probability hold in quantum mechanics? ––115.178.29.142 (talk) 03:21, 28 June 2010 (UTC)[reply]

Yes, the probabilities in quantum mechanics are probabilities. If the definition of probability didn't apply, it wouldn't be called a probability. (I'm having difficulty seeing the motivation for this question.) Looie496 (talk) 04:25, 28 June 2010 (UTC)[reply]
I'm afraid I think that answer is a little reductive. Sigma-additivity is part of the mathematical theory of probability. Whether it has any direct physical counterpart is another matter entirely.
To elaborate: Sigma-additivity says that if you have a countably infinite (or, less interestingly, finite) number of mutually exclusive possibilities, the probability that one of the events happens is exactly the sum of all the probabilities of an individual event happening.
This property is important to make the mathematical theory work smoothly. What, if any, physical reality it corresponds to, is less clear — are there even an infinite number of mutually exclusive events to consider, in the physical world? There may be, but then again there may not.
You don't have to go to anything as exotic as sigma-additivity of measures to see this distinction. An example much nearer to home is provided by the real numbers. Mathematically, the reals are an inherently infinitary notion; each real number contains infinitely much information, all wrapped up in a neat little package. Does, say, the distance between two electrons truly encode infinitely much information? Perhaps, but I think the jury is still out (in fact the jury may well never come back). --Trovatore (talk) 08:08, 28 June 2010 (UTC)[reply]
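A toy illustration of the countably infinite case Trovatore describes (a hypothetical example, not from the thread): the geometric-style distribution P(X = k) = 2^-k over k = 1, 2, 3, … assigns probabilities to infinitely many mutually exclusive outcomes, and sigma-additivity says the probability of their union is the convergent sum of the individual probabilities.

```python
# Sigma-additivity for a countably infinite family of disjoint events:
# P(X = k) = 2**-k for k = 1, 2, 3, ...  The events {X = k} are mutually
# exclusive, and the probability that *some* k occurs is the infinite sum,
# which converges to 1.
partial_sums = []
total = 0.0
for k in range(1, 60):
    total += 2.0 ** -k
    partial_sums.append(total)

print(partial_sums[0])    # 0.5
print(partial_sums[2])    # 0.875
print(round(total, 12))   # 1.0 (the tail beyond k=59 is below float precision)
```

The finite partial sums are what any finitely-additive theory gives you; sigma-additivity is precisely the license to pass to the limit.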
So basically it's unknown whether or not sigma additivity is true in physics? Fundamentally, what I want to know is whether the axioms of probability are actually "true" in real life, so to speak. Axioms are meant to be self-evident, especially in mathematics; is this true for the axioms of probability, such as sigma additivity? ––Original Poster

Note that probabilities in quantum mechanics are the square of the norm of the wavefunction. The (complex) wavefunction itself is not a probability and does not obey the axioms of probability, even though certain terminology, such as "probability amplitude" might suggest otherwise. 157.193.175.207 (talk) 08:06, 28 June 2010 (UTC)[reply]

Note that sigma additivity of probabilities applies to sets of pairwise disjoint (i.e. mutually exclusive) events. It can appear to fail in quantum mechanics because events that we intuitively expect to be mutually exclusive are not necessarily so. For example, in a double-slit experiment, suppose event A is "particle goes through slit 1 and hits point X on screen", and event B is "particle goes through slit 2 and hits point X on screen". In classical physics we expect the particle to follow a single path from source to screen (even though we do not observe this path) and so we expect A and B to be mutually exclusive; this leads us to expect Pr("particle hits point X on screen") to be Pr(A)+Pr(B). In quantum mechanics, Pr("particle hits point X on screen") is not Pr(A)+Pr(B) - wavefunctions add, but probabilities do not. This is not because sigma additivity fails, but because A and B are no longer mutually exclusive - the particle can, in effect, go through both slits (see path integral formulation). Gandalf61 (talk) 08:49, 28 June 2010 (UTC)[reply]
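Gandalf61's point can be made numerically (the amplitudes below are made-up illustrative values, not real experimental data): with complex amplitudes a and b for the two paths, the detection probability at X is |a+b|², which differs from |a|²+|b|² by an interference term.

```python
# Two hypothetical path amplitudes for "through slit 1, hits X" and
# "through slit 2, hits X".  Probabilities are squared norms of amplitudes,
# so amplitudes add but probabilities do not.
a = 0.5 + 0.0j          # amplitude via slit 1 (made-up value)
b = 0.3 + 0.4j          # amplitude via slit 2 (made-up value)

p_a = abs(a) ** 2            # probability via slit 1 alone
p_b = abs(b) ** 2            # probability via slit 2 alone
p_quantum = abs(a + b) ** 2  # amplitudes add first, then square

print(round(p_a + p_b, 10))   # 0.5  -- the "classical" mutually-exclusive answer
print(round(p_quantum, 10))   # 0.8  -- differs by the cross term 2*Re(a*conj(b))
```

The discrepancy (0.3 here) is exactly the interference term; when the two events really are mutually exclusive, the cross term vanishes and the probabilities add as the axiom demands.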
Obviously the probability amplitudes do not obey the axioms of probability, because the probability amplitudes are not actually probabilities. However, do the probabilities derived from the probability amplitudes obey the axioms of probability? Basically what I want to know, as I stated above, is whether or not the probability axioms are actually true in nature or whether they were just arbitrarily chosen. ––Original Poster
Ignoring quantum mechanics, have you tried rolling dice, or something else simple where you can test for yourself whether the axiom has a physically real basis, or are you already past that stage? What exactly are you asking - is this question specifically about quantum mechanical probabilities, or more general?
"However, do the probabilities derived from the probability amplitudes obey the axioms of probability" yes.87.102.11.74 (talk) 12:13, 28 June 2010 (UTC)[reply]
The result obtained from rolling dice is a deterministic system dependent on how you roll the dice. That's why I chose quantum mechanics. ––OP
That's a false dilemma. The axioms of probability are not "just arbitrarily chosen" - they are carefully chosen to provide a consistent mathematical framework that implements our intuitive concept of odds and probability. However, we cannot measure probabilities directly - we can only estimate them from the results of multiple trials. So it is probably impossible to say whether they are "actually true in nature". How would you design an experiment to test whether probabilities "in nature" do or do not conform to the axioms of probability? Gandalf61 (talk) 12:11, 28 June 2010 (UTC)[reply]
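One modest check in the spirit of Gandalf61's point (a simulation standing in for a physical trial, so it only exercises the model, not nature): estimate the frequencies of two disjoint events and of their union from repeated trials, and note that for genuinely exclusive events the counts satisfy additivity exactly, trial by trial.

```python
import random

random.seed(0)
n = 200_000
count_a = count_b = count_union = 0
for _ in range(n):
    roll = random.randint(1, 6)        # a simulated fair die
    a = (roll == 1)                    # disjoint events: A = "rolled 1"
    b = (roll == 2)                    # B = "rolled 2"
    count_a += a
    count_b += b
    count_union += (a or b)

# For mutually exclusive events, each trial contributes to the union count
# iff it contributes to exactly one of A, B -- so additivity of the
# estimated frequencies holds exactly, not just approximately.
print(count_union == count_a + count_b)   # True
print(count_union / n)                    # close to 1/6 + 1/6 ~ 0.333
```

All a physical experiment can ever give you is such frequency estimates, which is why "are the axioms true in nature?" is so hard to operationalise.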
How are the axioms of probability derived? Or are they self-evident, as many axioms in mathematics are? And if they are self-evident, doesn't that mean that they must be true in quantum mechanics?––OP
I think you may have misunderstood the term "self-evident". When we say that axioms are taken to be self-evident, this means that we do not have to prove that an instance of a mathematical structure, such as a probability, conforms to the axioms of that structure, because the axioms are taken to be part of the object's definition. This avoids the danger of infinite regress in mathematical proofs - each chain of reasoning in a proof eventually reaches an axiom, which we can take to be true by definition. "Self-evident" does not mean "obvious" - the axioms for many mathematical structures are far from obvious. Neither does it mean "true in reality" - a mathematical structure may or may not be a good model of some part of reality, but this does not affect its axioms. Gandalf61 (talk) 14:00, 28 June 2010 (UTC)[reply]
Hmm, no, I don't really agree with that; that's a bit close to the sort of formalism that reduces mathematics to a meaningless game. It is important whether axioms are true in reality, for cases (like the natural numbers, or sets considered as elements of the von Neumann hierarchy), where the objects they are talking about are well-specified. And I think self-evident axioms do have to be obvious once correctly understood.
The point is that not all axioms are self-evident. There are different sorts of axioms. One sort is the self-evident kind; once you understand what it means, it is intuitively clear that it's true. The axiom of choice is an example of this.
Then there are axioms that are not intuitively clear at all, but for which evidence accumulates that they are actually true. Large cardinal axioms are in this category.
Then there are axioms for which there is no particular evidence, or even evidence against them, but which are convenient in certain contexts (e.g. the axiom of constructibility, Martin's axiom, although some might argue that Woodin's work provides evidence for the latter).
But the "axioms of probability" being discussed here are in still a fourth category, that of "definitions in disguise". When you speak of the "axioms" of a group, what you really mean is the properties that a structure must satisfy in order to be considered a group. Similarly, the axioms of probability are the properties that a function from your event space to the reals must satisfy, in order to be considered a probability measure. --Trovatore (talk) 19:02, 28 June 2010 (UTC)[reply]
Despite your poor opinion of it, formalism is a logical and consistent position and a mainstream branch of the philosophy of mathematics. Your taxonomy of self-evident axioms, axioms that require evidence, convenient axioms and axioms that are "definitions in disguise" seems highly arbitrary to me. To a formalist, all axioms are "definitions in disguise", and the idea of collecting evidence "for" or "against" them is meaningless. Gandalf61 (talk) 21:30, 28 June 2010 (UTC)[reply]
Formalism is understandably very popular among mathematicians who would rather not think about foundational philosophy at all, but it has severe limitations once you start to take it seriously. This is probably not the place to discuss them at length; let me just say that formalism has no satisfying explanation for the apparent coherence of the overall mathematical picture.
I disagree that "to a formalist, all axioms are 'definitions in disguise'" — actually you have to be a realist to some degree to make sense of that notion of axiom. Otherwise, what are you defining? I would say that formalists would more easily place all axioms in the "convenient" category. --Trovatore (talk) 21:50, 28 June 2010 (UTC)[reply]
The question "What are you defining ?" only makes sense in a realist framework. For a formalist, axioms define what they define. A formalist does not expect the structures of mathematics to have any "existence" or reference outside of themselves. If they do happen to have some contingent correspondence to some aspect of the real world, then that is a fortuitous coincidence. I don't see how or why a formalist would categorise axioms as "convenient" or "inconvenient". Gandalf61 (talk) 07:14, 29 June 2010 (UTC)[reply]
So you're saying that the axioms define — themselves? If all you have is the axioms, what is the point of speaking of them as "defining" anything? --Trovatore (talk) 07:26, 29 June 2010 (UTC)[reply]
No, I said the axioms define what they define. The group axioms define a group, and a group is a mathematical object for which the group axioms are true - nothing more, nothing less. For a formalist, all mathematical structures are like that; there is no need or expectation of some real world referent against which the axioms can be compared. Gandalf61 (talk) 12:15, 29 June 2010 (UTC)[reply]
But if there are in fact no mathematical objects, as a formalist would hold, then how can you make sense of the claim that a group is a mathematical object for which the group axioms are true? --Trovatore (talk) 18:27, 29 June 2010 (UTC)[reply]
We clearly have somewhat different conceptions of formalism. No point continuing this discussion - I am done here. Gandalf61 (talk) 09:04, 30 June 2010 (UTC)[reply]
LOL, all I was asking was whether the axioms of probability were true in quantum mechanics (to which I still don't have a definitive answer...). Actually, it appears that I do have a definitive answer from 87.102.11.74, but he didn't give any reason to support his claim. Does classical probability apply in quantum mechanics? ––115.178.29.142 (talk) 22:15, 28 June 2010 (UTC)[reply]

It's not clear to me that anyone can possibly answer your question. It's a little like saying: we measure mass as a real number, so do masses have the Archimedean property? I can't imagine how you would design an experiment to test that. --Trovatore (talk) 22:19, 28 June 2010 (UTC)[reply]

In that case, what's the logic behind using the axioms we do? And furthermore, is it OK to use classical probability in quantum mechanics?––115.178.29.142 (talk) 00:30, 29 June 2010 (UTC)[reply]
You might want to take a gander at The Unreasonable Effectiveness of Mathematics in the Natural Sciences. I don't think you're going to get any definitive answer here. This is stuff people will be arguing about for a long time. I would note in passing that there's all sorts of stuff in QM of which people could (and do) ask whether/why it's "OK". --Trovatore (talk) 00:39, 29 June 2010 (UTC)[reply]
The probability of finding particle A at position y at time x is 1/5. The probability of finding particle B at position z and time w is 1/10. What is the probability of finding particle A at position y at time x AND particle B at position z at time w?––115.178.29.142 (talk) 01:00, 29 June 2010 (UTC)[reply]
No idea - you haven't given us any information on whether the events are independent or correlated. I carry my umbrella on 1 day in 5 and it rains on 1 day in 10. What is the probability that I carry my umbrella on a day when it is raining? Gandalf61 (talk) 07:14, 29 June 2010 (UTC)[reply]
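Gandalf61's umbrella example in code (using the hypothetical numbers from the post): only under an added independence assumption does the joint probability factor as a product; without it, the joint probability is only pinned down to a range by the marginals.

```python
p_umbrella = 1 / 5     # P(carry umbrella), from the post
p_rain = 1 / 10        # P(rain), from the post

# If the two events were independent (an assumption the post does NOT grant),
# the joint probability would be the product of the marginals:
p_both_independent = p_umbrella * p_rain
print(round(p_both_independent, 3))   # 0.02

# Without independence, the joint probability is only constrained to lie
# between the extremes allowed by the marginals (the Frechet bounds):
lower = max(0.0, p_umbrella + p_rain - 1.0)   # 0.0 -- could be fully disjoint
upper = min(p_umbrella, p_rain)               # 0.1 -- rain could imply umbrella
print((lower, upper))
```

This is exactly why the particle question as posed has no single answer: Pr(A and B) needs a joint distribution, not just the two marginal probabilities.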

Signal processing


Could anyone recommend a good textbook on circuits and signal processing? I would like something to read over the summer. 173.179.59.66 (talk) 07:58, 28 June 2010 (UTC)[reply]

Are you looking for an introductory circuits text, or a text on circuits for signal processing? This will significantly change what counts as a "good book" for you.

Proakis and Manolakis, Digital Signal Processing, is the "standard" DSP reference. It's very mathematical and expects a solid understanding of discrete mathematics before you start. It also focuses exclusively on digital signal processing. This is the book if you already have a solid background, but it will be totally incomprehensible if you aren't mathematically inclined. (It also "assumes" that you understand how to map a z-domain algorithm back to a digital circuit - an easy task, but one that isn't explained in this book.) I also recommend Physical Audio Signal Processing. This book focuses on modeling physical acoustic behaviors with signal-processing approximations, and then implementing those as simple algorithms in software or digital hardware. It also makes for "light reading", although it will jump to extremely mathematical treatments for one or two sections at a stretch.

The fields of "circuits" and "signal processing" are extremely broad - and the best books for an electrical engineer would be totally incomprehensible to a non-engineer. If you've never had exposure to even basic circuits, you should start with an introductory text on electronics before you start to worry about signal processing. Fundamentals of Electric Circuits by Alexander and Sadiku is a good one, but if you've had even cursory circuit training, the first half of this book will insult your intelligence. In any event, the book does cover methods all the way up to frequency analysis, resonance, and I think even does s-domain circuit solutions.

If you want to proceed down the all-analog route, you may find a book on analog control theory or RF signal conditioning a "must" - signal processing entails a totally different skillset in the analog domain. For this, you will need Analog Circuits, by Gray and Meyer, or Planar Microwave Engineering. (You can buy these online or in any bookstore... but they are very advanced circuit theory books.) Can you specify your baseline knowledge/background a bit more? Nimur (talk) 15:07, 28 June 2010 (UTC)[reply]
There is a course next semester that I'm enrolled in called signal processing (they don't have a textbook listed yet). Here's the course description: "Experimental research depends strongly on electrical and electronic instruments. Today, signals from various probes are most of the time transformed into some kind of electric signal, followed by some kind of digitization. This course will review some of the concepts that are encountered in the treatment of such electric signals in order to optimize the quality of the measurements. Some of the main course topics that will be discussed are:

• dc circuits and networks • Linear circuit elements: R, L, C • Sinusoidal signals: phasors and complex algebra • Filters: High-pass, Low-pass and Resonance • Power, rectification and noise • Fourier methods"

I'm not sure which of the textbooks above fits with the course. And thanks for the swift and detailed response! 173.179.59.66 (talk) 00:28, 29 June 2010 (UTC)[reply]
Based on what you've described, it sounds like Alexander and Sadiku is the book you want. The others might be fun to look at but I think you'll need to work your way up to them. If you find that the Fundamentals book moves too slow, you can really skip or skim several of the first chapters. The other books I mentioned will probably be too advanced if you still do not know the basics of Fourier transforms and linear networks; but in time you'll have the foundations. Electronic engineering is very dependent on a solid understanding of the basics. Nimur (talk) 01:17, 29 June 2010 (UTC)[reply]
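As a small taste of the phasor/complex-algebra material in the course description (the component values below are made up for illustration): the gain of a first-order RC low-pass filter falls straight out of treating the capacitor as a complex impedance 1/(jωC) and applying the ordinary voltage-divider rule.

```python
import math

def rc_lowpass_gain(f_hz, r_ohm, c_farad):
    """Magnitude of Vout/Vin for a series-R, shunt-C low-pass divider."""
    omega = 2 * math.pi * f_hz
    z_c = 1 / (1j * omega * c_farad)      # capacitor impedance as a phasor
    return abs(z_c / (r_ohm + z_c))       # complex voltage-divider rule

R, C = 1_000.0, 159.0e-9                  # ~1 kHz corner (made-up values)
f_corner = 1 / (2 * math.pi * R * C)      # corner frequency f_c = 1/(2*pi*R*C)

print(round(f_corner))                                 # ~1001 Hz
print(round(rc_lowpass_gain(f_corner, R, C), 3))       # 0.707, the -3 dB point
print(round(rc_lowpass_gain(10 * f_corner, R, C), 3))  # ~0.1, one decade above
```

The same complex-arithmetic idea carries through the whole topic list: resonance, power, and Fourier methods are all phasor algebra at heart.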
Thanks man, I really appreciate your help. 173.179.59.66 (talk) 02:31, 29 June 2010 (UTC)[reply]

Quantum Handwaving


What is the meaning of quantum handwaving?  Jon Ascton  (talk) 08:41, 28 June 2010 (UTC)[reply]

See handwaving. Some context would be helpful, but it probably means a plausible but informal argument based on the principles of quantum mechanics. Gandalf61 (talk) 12:01, 28 June 2010 (UTC)[reply]
Maybe, maybe not. I've observed that when scientists put "quantum" in front of something, it can change its commonsense meaning quite considerably (e.g. the meanings of quantum teleportation or quantum computer are not obvious from their names).
In any case, Googling for the phrase shows it cropping up somewhat informally in ID/Creationism/Big Bang debates, probably in reference to the idea that quantum thermal fluctuations serve as a "first cause" in the Big Bang, which seems to be seen as a form of handwaving by ID/Creationist types (that is, you are appealing to quantum mechanics in a vague way to get the answer you want, but it does not feel very concrete). --Mr.98 (talk) 15:59, 28 June 2010 (UTC)[reply]

Wind chill and perspiration


I am trying to understand the effect of wind chill on cooling the human body and its impact on perspiration. Anecdotally, I find that when there is a strong breeze while exercising, I seem to sweat much less. I am not sure if this is because I am sweating as much as usual, but the breeze is helping it to evaporate faster, or whether my body is being cooled by the wind chill, so I need to sweat less, or a combination of both. The article on wind chill mentions that it causes cooling, but doesn't actually go into the details of why that happens. My guess is that the moving air provides a greater number of "cool" air molecules to absorb heat from the body, much like why a liquid cools objects faster than air. On that basis I would expect that the wind chill is, in fact, cooling the body so that less sweating is required. However, I have also been told that swimmers are advised to still drink a lot because they do actually still sweat. If my theory about the wind chill is correct, it would seem to suggest that water would cool even more and so reduce sweating even more. Any input would be appreciated. Thanks HappyHopper777 (talk) 11:31, 28 June 2010 (UTC)[reply]

I think both effects are significant, and the balance will depend on humidity and airflow over the skin. A person will perceive that they are sweating more under high humidity or in still air, because the sweat accumulates on the body, but people still lose heat through sweating in high winds and low humidity, even when there is no noticeable sweat on the skin. Dbfirs 12:35, 28 June 2010 (UTC)[reply]
It takes energy to convert liquid water into water vapor - that energy comes from the warmth of your skin - so when sweat evaporates, it cools you down. With less humid air, that happens more easily - and in 100% humidity, it doesn't happen at all. When there is no wind - or when you are wearing lots of clothing that traps air - the temperature of the air close to your skin goes up (because it's being heated by your body) and the humidity of that air also goes up because it's picking up water vapor from your evaporating sweat. When that layer of air isn't moving at all, you start to feel really hot because sweat evaporation has stopped and the air is warm. Add a little wind - and/or remove clothing to allow some convection (hot air rises) - and the air next to your skin is replaced by fresh air from further away. Now you have cooler AND drier air and your sweat can do its job. The situation with swimmers is a little different. Air is a really good insulator - so heat passes only slowly through it. Water conducts heat away quickly - so even though your sweat can't evaporate, the water is conducting the heat away very efficiently and you don't overheat so long as the water is cooler than you are. SteveBaker (talk) 12:44, 28 June 2010 (UTC)[reply]
Thanks Steve. That fits with my understanding, I think: that water is a good conductor, compared to air, because there are a lot more molecules available to absorb the heat from the object being cooled. Fast-moving air seems to have a similar effect in that, for a given amount of time, it provides a greater number of "cooler" molecules (i.e. molecules with less energy). However, because of the cooling effect of the fast air or water, I would have thought that there would be less need to sweat (because the body could transfer heat more easily to the fast air and to the water). I need to digest what you said above a bit more. Thanks HappyHopper777 (talk) 13:15, 28 June 2010 (UTC)[reply]
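A rough number behind Steve's explanation (the latent heat is a textbook value; the sweat rate is a made-up example): evaporating even a modest mass of sweat carries away a lot of heat, because water's latent heat of vaporization is large (about 2.4 MJ/kg near skin temperature).

```python
# Heat removed by evaporating sweat: Q = m * L_v
L_V = 2.4e6               # J/kg, latent heat of vaporization of water near 35 C
sweat_kg_per_hour = 0.5   # hypothetical sweat rate that actually evaporates

# Average cooling power over the hour, in watts (J/s):
heat_watts = sweat_kg_per_hour * L_V / 3600
print(round(heat_watts))   # ~333 W, comparable to vigorous-exercise heat output
```

This is why evaporation dominates in air but is irrelevant in water: in water the sweat can't evaporate at all, and conduction takes over as the heat-loss mechanism.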

Pitot tube constant


For an 's' type pitot tube, the manufacturer gives the pitot tube constant as 0.8. Does this mean my actual velocity will be 0.8 times indicated velocity? Since the pitot tube is a straightforward application of Bernoulli's theorem, what causes this constant? Thanks —Preceding unsigned comment added by 125.17.148.2 (talk) 11:38, 28 June 2010 (UTC)[reply]

You are correct in believing a pitot tube is a straightforward application of Bernoulli's theorem. The only error suffered by pitot tubes is alignment error when the axis of the tube is so far out of alignment with the oncoming flow that the pressure in the tube is less than stagnation pressure. I don't know what it means to say a pitot tube has a constant of 0.8. It certainly doesn't mean that your indicated airspeed at sea level is 0.8 (or even 1.25) times true airspeed. For type certificated aircraft there is a requirement that the airspeed indicating system must be calibrated in flight and the error may not exceed three percent or five knots, whichever is greater. (See FAR 23.1323(b))
There is an error in temperature probes used on aircraft because a probe causes the airstream to come to a stop (stagnate) and that raises the temperature of the air in the vicinity of the probe, leading to an error. However, temperature probes are not called 's' type pitot tubes so that doesn't explain the 0.8. If 's' stands for supersonic the 0.8 may be related to the fact that when the aircraft is flying at supersonic speed the pitot tube is operating behind a shock wave so the airspeed sensed by the pitot-static system needs to be processed before giving a meaningful indication of true airspeed. Is your 's' type pitot tube intended for a supersonic aircraft? Dolphin (t) 23:03, 28 June 2010 (UTC)[reply]

No, there is nothing so technical about the 's' type; it is just better suited to particle-laden air than conventional 'l' type tubes. I agree with the alignment problem which may occur, but that is something to be avoided while measuring and is not an intrinsic property of the tube. Certain sites give some sort of constant less than unity by which the velocity must be multiplied, but I don't quite understand the logic behind that. Any sort of misalignment issue could only give a constant greater than 1. The manufacturer does not seem too technically aware, beyond the fact that there is 'some constant 0.8'. Incidentally, 0.8 is roughly the factor by which the centre velocity must be multiplied to give the average velocity in a duct, but I don't think he is talking about that (and neither am I). So basically, what is a pitot tube constant? Thanks —Preceding unsigned comment added by 122.175.68.41 (talk) 16:23, 29 June 2010 (UTC)[reply]
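For what it's worth, S-type (Stausscheibe) tubes used in dusty stack-gas flows are commonly supplied with a calibration coefficient around 0.8: the rearward-facing port reads below true static pressure, so the tube senses a larger differential than an ideal pitot-static pair would, and the coefficient scales Bernoulli's result back down. A sketch of the usual working formula, with made-up input values:

```python
import math

def pitot_velocity(delta_p_pa, rho_kg_m3, tube_coeff=1.0):
    """Flow speed from a pitot differential via Bernoulli: v = C * sqrt(2*dp/rho).

    tube_coeff (C) is the manufacturer's calibration constant: C = 1 for an
    ideal pitot-static tube, and roughly 0.8 for an S-type tube, which
    over-reads the differential pressure.
    """
    return tube_coeff * math.sqrt(2 * delta_p_pa / rho_kg_m3)

dp = 150.0          # Pa, hypothetical measured differential
rho = 1.2           # kg/m^3, air near room conditions

v_ideal = pitot_velocity(dp, rho)            # what Bernoulli alone would say
v_s_type = pitot_velocity(dp, rho, 0.8)      # corrected for the S-type tube

print(round(v_ideal, 2))    # 15.81 m/s
print(round(v_s_type, 2))   # 12.65 m/s
```

So the constant multiplies the Bernoulli-derived velocity (equivalently, the square root of the measured differential), not the final average duct velocity.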

Contents of Cold Pack


Do cold packs only contain ammonium nitrate? I had one and reacted it with hydrochloric acid and it formed a precipitate of ammonium chloride, which is much less soluble than the nitrate. Nitric acid should be formed, but it didn't react with copper to form copper(II) nitrate. It turned brown when it was heated and formed a white precipitate. --Chemicalinterest (talk) 13:14, 28 June 2010 (UTC)[reply]

Are you talking about the gels that you pre-freeze before using, or the endothermic ones where you break&shake a room-temperature bag? Our cold pack article gives some possible materials for each one. DMacks (talk) 15:35, 28 June 2010 (UTC)[reply]
Break and shake. It contained a saturated solution of the chemical with a packet of water. --Chemicalinterest (talk) 16:38, 28 June 2010 (UTC)[reply]
At a guess, I would say that it's hydroxyethyl cellulose. It would be in there to increase the viscosity of the resulting solution, so that the cold pack sits more securely on whatever you're trying to cool. To see why that might be helpful, fill and tie-seal a smallish plastic bag with tap water and then try to balance it on your ankle... Physchim62 (talk) 21:03, 28 June 2010 (UTC)[reply]
Could be ammonium chloride rather than ammonium nitrate. You say you expect you started with an already (near-)saturated solution? You don't say how strong your HCl solution was, but it's easy to get HCl much more concentrated than NH4Cl, so when you mix them you boost the overall Cl concentration a bunch and NH4Cl might precipitate. Physchim62 has an interesting point that it might be some gelling agent rather than the actual "dissolves to cool" material. The ice packs I've used really do seem almost water-liquidy. But I'm not sure why adding (presumably) aqueous HCl to a water solution of hydroxyethyl cellulose would cause it to precipitate, or if an aqueous acid solution of hydroxyethyl cellulose would turn brown and form a white precipitate when heated. Do you have access to silver nitrate? It's the classic test for chloride. Our qualitative inorganic analysis article has many of the classic near-definitive tests for determining what ions you have (though some require chemicals you may not have access to). DMacks (talk) 14:22, 29 June 2010 (UTC)[reply]
My HCl was concentrated, about 12 M. The solution was viscous. The precipitate did seem to have similar solubility to NaCl, so it was probably NH4Cl. The hydroxyethyl cellulose may have pyrolyzed when heated in the superconcentrated solution.
I have no silver nitrate; I wanted to buy it, but it was too expensive, $12.95 for a couple of grams.
So it doesn't seem like it contains ammonium nitrate. It did react with household bleach (sodium hypochlorite), though (another criterion). --Chemicalinterest (talk) 15:16, 29 June 2010 (UTC)[reply]
What's the decomp temp of hy eth cellulose? --Chemicalinterest (talk) 15:21, 29 June 2010 (UTC)[reply]
The melting point (not decomp temp) is 140 Celsius. There's no decomposition temperature given, and in any case the stuff would hydrolyse rather than pyrolyse when heated in solution. FWIW 67.170.215.166 (talk) 01:07, 30 June 2010 (UTC)[reply]
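An order-of-magnitude check on the ammonium-nitrate hypothesis (the enthalpy of solution is a handbook value; the pack size and composition below are assumed for illustration): dissolving NH4NO3 absorbs about +25.7 kJ/mol, enough for a substantial temperature drop in a typical break-and-shake pack.

```python
# Rough temperature drop for a hypothetical break-and-shake pack:
# 60 g NH4NO3 dissolving in 200 g water, ignoring heat leaking in from outside.
DH_SOLN = 25.7e3      # J/mol, enthalpy of solution of NH4NO3 (endothermic)
M_NH4NO3 = 80.04      # g/mol, molar mass of NH4NO3
C_WATER = 4.18        # J/(g*K), treating the whole solution as water

mass_salt, mass_water = 60.0, 200.0
moles = mass_salt / M_NH4NO3
heat_absorbed = moles * DH_SOLN                     # J pulled from the water

delta_t = heat_absorbed / ((mass_salt + mass_water) * C_WATER)
print(round(delta_t, 1))   # ~17.7 K drop under these assumptions
```

A drop of that size from near room temperature is consistent with how cold these packs feel, whichever ammonium salt is actually inside (NH4Cl is also endothermic on dissolution, just less so).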

documentary


I'm looking for good TV documentaries on ants, specifically the black garden ant. I've already watched Life in the Undergrowth, but it just skipped about between different species without giving any really detailed info. I want a documentary that follows the entire life cycle of the black garden ant. Thanks 82.43.90.93 (talk) 14:10, 28 June 2010 (UTC)[reply]

Do OLED displays degrade with UV exposure?


Do the organic compounds in OLED displays degrade with UV exposure? Should devices with such displays be kept out of direct sunlight? --70.167.58.6 (talk) 15:00, 28 June 2010 (UTC)[reply]

They do, but only in the really old OLED displays. The early OLEDs were made of PPV, which is easily oxidized by oxygen in the air, especially when catalyzed by UV rays. Most modern OLEDs, though, are made of other materials such as polyfluorenes, indium alloys, etc., which are stable under UV radiation. So if your device is really old, then it's best to keep it out of direct sunlight (and be careful not to scratch the protective coating on the screen); but if it's a new device, then it doesn't really matter. FWIW 67.170.215.166 (talk) 08:10, 30 June 2010 (UTC)[reply]

Most toxic element


What's the most toxic element in its pure form? Note that I said toxic rather than harmful, so radioactive elements like uranium and plutonium don't count. --76.77.139.243 (talk) 15:03, 28 June 2010 (UTC)[reply]

Can I infer that what you're really trying to say is most chemically toxic element? Because plutonium, for example, is quite toxic (in aerosolized form), and the fact that the toxicity derives from its radioactivity does not change that (see toxicity). Toxicity does not distinguish between mechanism, but we can, for the sake of argument, ignore toxicity related to radiation, if that is what you are asking about. --Mr.98 (talk) 15:12, 28 June 2010 (UTC)[reply]
Well, discounting radioactives (polonium would easily win if you include radioactives), and going on U.S. permissible exposure limits (PELs) for the element (discounting compounds), the answer is beryllium. Physchim62 (talk) 15:14, 28 June 2010 (UTC)[reply]
Yes, I was going to suggest beryllium, judging from Median lethal dose. Of course, toxicity is tricky—as our beryllium poisoning article points out, there are a whole set of different toxicities and effects depending on the route of exposure. But it's pretty nasty stuff, worse than arsenic and other "traditional" elemental poisons. --Mr.98 (talk) 15:17, 28 June 2010 (UTC)[reply]
PELs (and LD50s) are incomplete descriptors of toxicity, but it's noteworthy that the PEL for beryllium is 100 times lower than the PEL for elemental fluorine; that is, you're allowed to have 100 times more fluorine gas than beryllium dust in the air of a U.S. workplace. Physchim62 (talk) 15:44, 28 June 2010 (UTC)[reply]
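As a quick sanity check on that 100x figure, here is a back-of-the-envelope comparison. The PEL values in the sketch are assumptions based on circa-2010 OSHA Table Z-1 limits (fluorine 0.1 ppm ≈ 0.2 mg/m³; beryllium 0.002 mg/m³), not figures quoted anywhere in this thread:

```python
# Comparison of assumed U.S. OSHA permissible exposure limits (8-hour
# time-weighted averages, circa 2010). Both numbers are illustrative
# assumptions for the sake of the comparison.

pel_mg_m3 = {
    "fluorine (F2 gas)": 0.2,    # ~0.1 ppm converted to mg/m^3
    "beryllium (dust)":  0.002,
}

ratio = pel_mg_m3["fluorine (F2 gas)"] / pel_mg_m3["beryllium (dust)"]
print(f"Allowed fluorine / allowed beryllium = {ratio:.0f}x")  # 100x
```

With these assumed values the workplace air may carry 100 times more fluorine by mass than beryllium, matching the claim above.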
There is also berylliosis, which discusses the lung disease caused by beryllium. None of the articles explain why it is so toxic, however; this paper says it inhibits enzymes that contain magnesium and calcium ions. It can also function as a hapten, leading to apoptosis of macrophages in the lungs. 86.7.19.159 (talk) 16:12, 28 June 2010 (UTC)[reply]
See also the last time we talked about this, and other previous discussions on the science and other ref-desks. Just type "most toxic element" into the ref-desk search box at the beginning of the page. DMacks (talk) 15:39, 28 June 2010 (UTC)[reply]
Yes I remember asking a similar question a while back. One thing that the previous question brought up is that some elements are harmful if they are inhaled, but not all that harmful if they are eaten, since they are not absorbed into the body. So your question on which element is most toxic is going to depend on the manner of exposure. Googlemeister (talk) 16:19, 28 June 2010 (UTC)[reply]

High level nuclear waste


I thought all high-level waste could be recycled and reused? But the High level waste article doesn't mention recycling anywhere, just disposal. 148.168.127.10 (talk) 15:20, 28 June 2010 (UTC)[reply]

Actually it does mention it quite a bit, but perhaps in a term with which you are not familiar: nuclear reprocessing. --Mr.98 (talk) 15:42, 28 June 2010 (UTC)[reply]
High-level waste is the highly radioactive waste material resulting from the reprocessing of spent nuclear fuel, so the stuff has already been recycled. Only the material which is no longer useful in a nuclear reactor is disposed of. There are highly radioactive elements not usable for fission, especially the lighter elements produced by fission. --Stone (talk) 15:53, 28 June 2010 (UTC)[reply]
The terminology seems to be: once you get it out of a reactor, it is spent nuclear fuel (SNF). If you reprocess that, the result is high-level waste (HLW). Now what's confusing is that our article on radioactive waste does not differentiate between the two even though they are quite different from an economic and a physical standpoint. I suspect this is because we are going by US definitions and the US is kind of muddy about such things, since it does not reprocess except for military purposes (and hasn't done that for a while, I don't think), so we treat them as being basically the same from a waste perspective. In France, though, the distinction would be important, as they do civilian reprocessing. --Mr.98 (talk) 16:14, 28 June 2010 (UTC)[reply]

wormholes


I would like to win the Nobel prize by building two wormholes, one leading to a very hot place (e.g. surface of the sun), one leading to a very cold place (lots are available), and use the heat difference to gain, in human terms, "free" energy. Also, insofar as the wormholes are unstable and require energy, some of this energy could be invested in keeping them stable through some means. The key thing is that the wormholes don't have to be very big, now do they? An infinitesimal point of high heat is enough to drive an engine, isn't it? Now, I would like to know how to make a wormhole, as I've looked on the Internet and no manufacturer sells such a solution, at any price. It will be very hard for me to find investors if I cannot even quote the price of what I am trying to acquire! Therefore, I would like to get a price estimate for building one very small, but stable wormhole. If it is in the billions of dollars, rather than a few hundred thousand or million, can someone explain why I have to pay so much, and can I somehow get around those billions by doing some of the work myself or in-house? In all, I would like to approach the subject much as an investment project in building a new power plant, however my real personal interest is in the Nobel prize. Thank you kindly for any help you may have to offer toward my goals. Very truly yours,
Philius Botsch —Preceding unsigned comment added by 84.153.206.127 (talk) 16:25, 28 June 2010 (UTC)[reply]

It cannot be done for any amount of money with current technology. Even if you had all the money of every intelligent race in the entire universe, you couldn't purchase a wormhole machine because they do not exist. -- kainaw 16:29, 28 June 2010 (UTC)[reply]
I did not mean in the lower-class sense of purchasing something on the shelves, on display, or in the catalogues, I meant in the bespoke sense of paying for the work that will lead to the realization. Of course, in the former sense, if currently no mechanical watch includes a hydrometer, then even a hundred billion dollars will not buy you one off of any shelf or out of any display on Earth or anywhere in the Universe. To a prince, of course, far less than a billion is needed if he is to have a mechanical watch with a hydrometer. It is in this latter sense that I ask how much it will cost. 84.153.206.127 (talk) 16:37, 28 June 2010 (UTC)[reply]
I believe Kainaw understood the intent of your question. There is no reason to believe that a way of creating and maintaining a stable wormhole can be developed for any amount of money. This is more like "building a watch that makes time run backwards" than "building a watch with a hygrometer." -- Coneslayer (talk) 16:44, 28 June 2010 (UTC)[reply]
I was under the impression that it was fully possible under the standard model of physics, i.e. you do not have to change physics if you are to have a wormhole. What are the mechanisms being proposed that would allow for the very wormholes standard physicists (such as Hawking, Greene, etc.) talk about, and, engaging in a bit of blue-sky thinking - let me put it this way - what sums have been paid for comparable achievements (physically allowed, however nowhere realized upon inception, and with uncertain prospects) that have in fact been brought to fruition (quantum computing, etc.)? That might give me a baseline to base my calculations on. 84.153.206.127 (talk) 16:56, 28 June 2010 (UTC)[reply]
You are under a false impression. As I say below, you need to add exotic matter to the standard model if you are going to get stable wormholes and there is no reason to believe exotic matter is possible (and plenty of reason to believe it isn't - for example, wormholes can mess with causality in ways that a lot of scientists suspect is impossible). --Tango (talk) 17:02, 28 June 2010 (UTC)[reply]
Thank you for dispelling my false impression. In your estimation, in that model which is the standard model with the addition of exotic matter (a model that granted might not describe our universe) what mechanisms could cause a wormhole? (if you know) or, if you don't know, what methods are good candidates to potentially cause wormholes. Thank you kindly for your input. 84.153.206.127 (talk) 17:12, 28 June 2010 (UTC)[reply]
If you want a Nobel Prize, you're going to have to earn it. Generating free energy in the way you describe would probably be worth a Nobel Prize, but the key part is creating the wormholes. Once you can create wormholes, using them to make a heat engine is trivial by comparison. Current theories suggest that useful wormholes (ones that remain stable for long enough for something to pass through, including energy) can only exist if you have exotic matter, in particular matter with negative mass. At the moment, we have no evidence that such matter is even possible. We have no idea how to create it. If you can solve that problem, you'll probably get a Nobel Prize. --Tango (talk) 16:42, 28 June 2010 (UTC)[reply]
Thank you for your detailed response. When you say "we have no idea how to create [it]" where "it" might not be possible, can you tell me which candidate methods are in the minds of serious, credible physicists who would even write the words "useful wormholes" in succession? No idea = no candidate methods at all, if I list everything I can think of that is physically possible to do with the objects of the universe, they would reply to each one "not worth a thought"? 84.153.206.127 (talk) 17:10, 28 June 2010 (UTC)[reply]
Assuming you're not just trolling, which I suspect to be the case, then no such method exists to create a wormhole, because there is very good evidence they don't/can't exist. No amount of money will buy you something which is impossible. Furthermore, if you want the Nobel Prize, it's usually a prerequisite that you develop the technology and theory yourself or as a group, not ask others to do the work for you and then claim it as your own. Lastly, I suspect this sort of achievement would obtain several Nobel prizes: one for discovering the existence of wormholes, another for creating one, another for using the system to generate free energy. Good luck to you, sir. Regards, --—Cyclonenim | Chat  16:57, 28 June 2010 (UTC)[reply]
Thank you for your detailed response. I believe you are asking me to make a jump from your justified reasoning "because there is very good evidence they don't/can't exist" for the statement "no such method exists to create a wormhole" to the unjustified statement "there is no candidate method to create a wormhole" on the same grounds. Non sequitur. Clearly, there is something that impels physicists to talk seriously of wormholes: in their blue-sky thinking, what mechanisms could even be candidates for creating these? Kind regards, PB 84.153.206.127 (talk) 17:10, 28 June 2010 (UTC)[reply]
Some physicists believe that wormholes can exist. Some do not. Some believe that they do exist. Of those who believe they can and do exist, mostly all believe that wormholes only exist for less than a nanosecond and are extremely tiny. To get the science-fiction version of a wormhole, you need tons and tons of energy. You can't use real energy. You have to use "negative energy". This is the energy that is the opposite of real energy. You pump negative energy into the wormhole to blow it up like a balloon. As you'd expect, negative energy doesn't react nicely with real energy, so we are basically discussing an attempt to keep a violent explosion much more powerful than a supernova under control. Further, there is no proof that negative energy exists. There is no way to detect it or create it. There is no way to detect a wormhole or predict where one will be. There is no way to inject energy that you cannot create into a wormhole that you cannot find. There is no way to control the wormhole even if you had the energy you can't create and the wormhole you can't find. In other words, this is all theory. It is not reality. Your best bet is to fund the creation of a time machine to jump millions of years into the future and bring back their technology, since we are much closer to time travel than wormhole creation and use. -- kainaw 17:19, 28 June 2010 (UTC)[reply]
Thank you, your response has finally brought home to me just what I have to do. 84.153.206.127 (talk) 17:34, 28 June 2010 (UTC)[reply]
Just so we're clear, what you "have to do" is give up. At present, this cannot be done, and unless you are the most brilliant scientist in existence (and sorry, but I do not get that impression), then you will not succeed in this monumental, probably impossible-at-present, task. Regards, --—Cyclonenim | Chat  17:55, 28 June 2010 (UTC)[reply]
Seriously, I am glad you guys were not talking to Galileo or Newton when they were doing science. If the guy wants to try to discover and then create wormholes, I say go for it. Just understand that the odds of success are low (but not zero). Googlemeister (talk) 18:25, 28 June 2010 (UTC)[reply]
Galileo and Newton were both experts on the existing science that they ended up overthrowing (actually did Newton really disagree with the established position on anything?). The OP has much to learn about existing science before he can hope to create new science. --Tango (talk) 18:30, 28 June 2010 (UTC)[reply]
I talked with God - it took 45 minutes of prayer - and you guys are absolutely right. He told me that wormholes are an impossibility. Everything several of you said above is exactly right, and so I must abandon this avenue of both basic research and development. Thank you for bringing me to the point where I seriously questioned my premises and invested the time to get a definitive answer. I hope you will be as helpful the next time I have plans on an equal scale. Very sincerely yours, Philius Botsch. 84.153.206.127 (talk) 18:43, 28 June 2010 (UTC)[reply]
Thanks for the trolling. I really hope I'm not the only person who can see that. Regards, --—Cyclonenim | Chat  19:06, 28 June 2010 (UTC)[reply]
You are not alone; at the very least it would be good if people understood that volunteers on this desk do not do it out of a desire to satisfy someone else's every whim, or do 99.9% of their work for them. But hey! Who knows? Sf5xeplus (talk) 21:23, 28 June 2010 (UTC)[reply]
Re: Newton: postulating gravity as an "occult force" was considered pretty scientifically and philosophically controversial at the time. The issue, basically, was that Newton proposed gravity as a force that acted at a distance for reasons he did not understand. This was considered kind of shady by people who considered themselves to be serious thinkers. It also didn't help that Newton basically thought that the entire universe would fall apart pretty quickly if God wasn't actively holding it together constantly. In the long run, though, Newton's laws work pretty dang well, even if you don't know what the big G "really means" in physical terms. Even today—well after Einstein gave us a better explanation of what is going on with gravitational force—we are still trying to come up with a wholly satisfying explanation (i.e. quantum gravity). So yeah... Newton was controversial, definitely disagreed with the established position on a lot of things (and not all of which he published on). --Mr.98 (talk) 19:51, 28 June 2010 (UTC)[reply]
What makes the above question amusing but not very encouraging is 1. the desire is to do something that is assumed will be so scientifically revolutionary that it would warrant a Nobel Prize, 2. the question asker happily admits he has no idea how this could be done, and 3. the question asker would like us to supply the means of doing it. This is not a serious endeavor. If we knew how to do it, and thought it would work, why wouldn't we do it and win the Nobel ourselves? We aren't running a charity here. Or any other scientist for that matter, including those who have spent their entire lives dreaming about the physics of wormholes. I'm not saying it can't be done, but you aren't going to do it the way you're trying to do it. If there is a Nobel to be won, it won't be won by asking us to do the work for you! --Mr.98 (talk) 19:54, 28 June 2010 (UTC)[reply]
Actually, we are running a charity here... --Tango (talk) 20:16, 28 June 2010 (UTC)[reply]
The goal of Wikipedia is to provide information. We provided it. What the OP does with it is his/her business. Some of our OPs are serious and legitimately curious individuals who make good use of the free service we provide. Some OPs are giggly adolescents who don't realize the amazing learning tool that they have been given free access to. That's fine; we provide high-quality responses anyway. The world has ways of sorting individuals for us, we don't have to waste our time doing that. All we do is provide scientific references. OP: read Physics, if you actually have even a slight inclination or interest. Nimur (talk) 21:37, 28 June 2010 (UTC)[reply]
If you have a large pile of money - you could spend it on paying a bunch of physicists to try to build you a wormhole - but that is an unbelievably risky thing. We don't know that wormholes can exist - if they can, we don't know whether or where any can be found naturally, we don't know how to create them, even if we had one, it would probably be smaller than an atom - and the only way to make it bigger might be to somehow (we don't know how) feed it with 'negative energy' - which may also not exist, not be manufacturable, etc, etc. Safe to say, the probability of getting any return on an investment is microscopically tiny. You have (maybe) a one in a trillion chance of getting a wormhole by spending (let's say) a trillion dollars in research. The other 999,999,999,999 times, you blow all the money and have nothing but a lot of REALLY interesting (but useless) research papers to show for your money.
You can make money far more reliably by (for example) investing in electric car design or fancy new solar panel designs. Even a relatively modest outlay (millions of dollars - but not billions or trillions) would produce a really good probability of making a decent profit on your investment. The problem here isn't about how clever your initial idea is - it's about risk. SteveBaker (talk) 23:37, 28 June 2010 (UTC)[reply]
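SteveBaker's risk argument can be made concrete with a one-line expected-value sketch, using only the illustrative numbers from his post (one-in-a-trillion odds against a $1 trillion spend):

```python
# Expected-value sketch of the wormhole "investment" described above.
# Both inputs are the illustrative figures from the post, not estimates.

p_success = 1e-12     # "one in a trillion" chance of success
cost_usd  = 1e12      # $1 trillion research budget

# Payoff needed just to break even in expectation:
#   p_success * payoff - cost_usd = 0
breakeven_payoff = cost_usd / p_success
print(f"Break-even payoff: ${breakeven_payoff:.0e}")  # $1e+24
```

For scale, world GDP is quoted elsewhere on this page as roughly $6e13, so the required payoff is on the order of seventeen billion years of world economic output; under these assumed numbers the bet is spectacularly negative in expectation.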

Battery chemistry labs


Other than UT Austin where are the best battery chemistry laboratories, who has been running them, and what are they known for? 208.54.14.26 (talk) 16:27, 28 June 2010 (UTC)[reply]

The Cui lab at Stanford University got a lot of press a few years ago for some nano-wire lithium ion battery technology. They are a materials-engineering and chemistry research group. You can read brief introductions to their research here and a list of scholarly peer-reviewed papers here. Nimur (talk) 21:42, 28 June 2010 (UTC)[reply]

Photonic computers


Why aren't photonic computers commercially available? They're much smaller and faster than electronic computers, so there would seem to be demand for them since people continue to buy new, faster electronic computers. --76.77.139.243 (talk) 16:42, 28 June 2010 (UTC)[reply]

I trust you've read optical computing? The last paragraph in 'Misconceptions, challenges and prospects' discusses why it is not in widespread use just yet. Regards, --—Cyclonenim | Chat  16:48, 28 June 2010 (UTC)[reply]
They use more power, but aren't they still faster despite that fact? --76.77.139.243 (talk) 16:58, 28 June 2010 (UTC)[reply]
It's not just about power and speed. Electronic computers combine multiple signals to perform complex computational tasks. Photons don't interact with each other (in this case) and so you can't combine signals easily. Regards, --—Cyclonenim | Chat  17:53, 28 June 2010 (UTC)[reply]
I don't think there is sufficient motivation to develop the technology yet because we haven't reached the limits of what we can do with electronic computers. There are at least a few more generations of electronic technology to go before we need to worry about developing photonic computers and the development cost of better electronics is much cheaper than developing photonics from scratch. --Tango (talk) 17:05, 28 June 2010 (UTC)[reply]
Our optical computing article is full of glaring errors and scientific inaccuracies. I'll try to rework it over the next night. For example, "Light, however, creates insignificant amounts of heat, regardless of how much is used." This is totally incorrect. Radiative heating occurs at all frequencies of electromagnetic radiation. Other discussion about interference, photonic logic, and so on, are all in need of attention. I would not recommend this article as a source of information until it has gone through significant editing with a reliable source. I'm hitting the library tonight to get a book on optical computing. The article does not seem to convey that light is just a different frequency of electromagnetic waves; some different physical phenomena are more common, and different materials have more desirable properties; but nothing is fundamentally different between optics and lower-frequency electromagnetic waves. Nimur (talk) 20:09, 28 June 2010 (UTC)[reply]
But electronic computers use electric current, not electromagnetic waves. Rmhermen (talk) 20:43, 28 June 2010 (UTC)[reply]
That's debatable. Modern computers use electromagnetic signaling - current is a parasitic effect. If we could design transistors that switched without any transient current, many of our power-density problems would be solved. E.g. zero-current switching. We don't use the current for anything - it is like "friction" in an engine, and only serves to dissipate energy. The work of calculating is done by switching signal-levels. At the fundamental level, information theory dictates that we must dissipate some tiny quantity of energy in order to store information. (This is sort of the "2nd law of thermodynamics" as applied to information computation, [1]). But the overwhelming majority of the electric current in a modern VLSI system is there because our transistors are "leaky." See the Power section and the logic section of our CMOS article, for example. Nimur (talk) 21:06, 28 June 2010 (UTC)[reply]
So Nimur wants to debate with Rmhermen... zero-current switching is an EMI-reduction technique that has nothing to do with computing. It is not only information theory and leakage that dictate some current flow in switching signal levels. Any real conductor has capacitance, and a flow of current is needed to change the voltage (= binary logic level 0 or 1) on it. Cuddlyable3 (talk) 12:18, 29 June 2010 (UTC)[reply]
I do not want a debate. You can read how information is conveyed in CMOS logic at the CMOS article. I can recommend several good books in addition. Digital Design: Principles and Practices ([2]) has an entire chapter dedicated to introducing you to the electrical behaviors and signal pathways in digital circuits. CMOS RFIC will introduce you to a more quantitative analysis of the characteristics of a modern VLSI circuit's electrical properties. Yes, any capacitor requires a current in order to charge. And it must by necessity dissipate that current as a resistive loss, or else it oscillates and information is not stored on it. Right now, the state of the art is such that the actual current is much higher than it theoretically could be, due to additional parasitic losses and leaks in the device. (In a photonics circuit, that would be analogous to "dielectric losses" - which are even worse than the thermal losses at VLSI scales). Any technique that improves CMOS switching current will reduce dynamic power dissipation in a digital circuit, whether the signal is propagated by light-frequency or RF. At present, though, there is no "CMOS" for optical frequencies because there are no practical optical transistors. Instead, photonic computer research usually focuses on other mechanisms to store information, such as in the form of frequency modulation, using nonlinear materials to perform frequency mixing. These techniques are at present not sufficiently developed to build a VLSI system. That is the primary reason we do not have optical computers - they can not be built in a way that their device counts compare to RFICs. Nimur (talk) 16:31, 29 June 2010 (UTC)[reply]
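For readers following the exchange above, the dynamic (switching) power both posters are describing is conventionally modelled as P = a·C·V²·f. Here is a minimal sketch; all the numeric inputs are made-up illustrative values, not measurements from any real chip:

```python
# Standard first-order model of CMOS dynamic power: the energy spent
# charging and discharging node capacitance each clock cycle.

def dynamic_power(activity, capacitance_f, vdd_v, freq_hz):
    """P = a * C * V^2 * f (activity factor, farads, volts, hertz)."""
    return activity * capacitance_f * vdd_v**2 * freq_hz

# Illustrative (assumed) numbers: 10% of nodes switch per cycle, 1 nF
# of total switched capacitance, 1.0 V supply, 3 GHz clock.
p_watts = dynamic_power(0.1, 1e-9, 1.0, 3e9)
print(f"Dynamic power: {p_watts:.2f} W")
```

Note this is only the switching term; leakage (the "leaky transistor" current Nimur mentions) and short-circuit current add to it in a real device.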
Are we talking about fibre optic cables to transmit computer signals instead of using electricity? It would require electricity to produce the light photons. ~AH1(TCU) 22:12, 2 July 2010 (UTC)[reply]

Least dangerous compound


I noticed that even seemingly harmless compounds, such as sodium bicarbonate, don't have all 0 for their NFPA 704; is there anything which is ranked 0 in all three categories? --76.77.139.243 (talk) 17:09, 28 June 2010 (UTC)[reply]

Lotsa stuff. See for yourself. --Ouro (blah blah) 17:51, 28 June 2010 (UTC)[reply]
Water? --Chemicalinterest (talk) 22:00, 28 June 2010 (UTC)[reply]
I was surprised myself! --Ouro (blah blah) 05:19, 29 June 2010 (UTC)[reply]
Am I missing something? The above ref lists water from "Mallinckrodt-Baker" as having a 0 for all three. Nil Einne (talk) 09:40, 29 June 2010 (UTC)[reply]
Our article on Properties of water had it down as 0-0-1. Checking the criterion for R=1, "Normally stable, but can become unstable at elevated temperatures and pressures", water is obviously R=0. I've changed the infobox. Physchim62 (talk) 09:57, 29 June 2010 (UTC)[reply]
Ah, okay. I checked the link to water and couldn't find anything about the NFPA rating from a cursory glance/search, so I gave up, although I wondered if there was an article which covered the chemistry in more detail. From the earlier ref, it doesn't list any NFPA rating for "Water Deionized and Bacteria Filtered" from CMS. I've been thinking that part of the reason may be that some feel imparting an NFPA rating on water, particularly for Instability/Reactivity, is questionable, since the definitions are partially based on reactivity with water. Nil Einne (talk) 11:50, 29 June 2010 (UTC)[reply]
However, it would not be entirely accurate to say that water is harmless. See water intoxication, dihydrogen monoxide hoax and flood. ~AH1(TCU) 22:09, 2 July 2010 (UTC)[reply]
Also drowning, storm surge, and tsunami. Am I forgetting anything? 67.170.215.166 (talk) 00:29, 3 July 2010 (UTC)[reply]

Applications of Wormholes


The last question got me curious about wormholes, and what they could potentially be used for. Assuming they exist, and they work, how many ways could they be used? The possibilities are enticing. As was mentioned before, they could be used as a heat engine, for faster-than-light travel, time travel, or weapons - I would assume sending a bomb through a wormhole would be pretty powerful, but you could also just open one mouth near a target on Earth, then open the other in the vacuum of space, and you have a small black hole that sucks everything into it. 148.168.127.10 (talk) 17:46, 28 June 2010 (UTC)[reply]

By the word "applications", I assume you mean "worthwhile applications". Keep in mind that IF wormholes exist and IF we can find one and IF we had technology to keep one open so it could be used, it would take massive amounts of technology and energy to do so. So, if you wanted to send a bomb through one to blow up someone's city, you'd first have to smuggle in all the technology to hold it open on the city's end. Similarly, to open it to the vacuum of space, you'd have to smuggle in all the technology. It is the equivalent of shooting someone with a rifle by asking them to aim their end of the barrel while you pull the trigger on the other end. Further, if you had the technology in place to hold open a wormhole, you would have the technology in place to simply destroy whatever it is you want to destroy. -- kainaw 18:00, 28 June 2010 (UTC)[reply]
(ec) I'm not sure they would work well as weapons. I doubt you could just "open one mouth near a target". You would probably have to drag one mouth from your lab to the target, which makes it difficult to use as a weapon (you could just take a big bomb instead). Faster than light travel is the obvious application, although once again you would probably have to drag one mouth there at slower than light speeds, so the first journey to your destination would have to be by the slow route. Time travel is also a possibility, although it probably couldn't be used to travel back to before you created the wormhole. The heat engine idea would probably work - it's sort of like putting a solar panel right next to the sun, but you don't need to transfer the electricity back to Earth. Faster than light information transfer is probably another option - no more lag on satellite feeds. (You will notice that every idea I've mentioned has "probably" in it - the existence of stable wormholes requires some change in our understanding of physics and there is no way of knowing what else would change with it.) --Tango (talk) 18:05, 28 June 2010 (UTC)[reply]
you've smuggled your response to my heat engine idea down here, but I'm on the case. Just so you know, my proposal is for there to be TWO wormholes, and so you don't have to transport the electricity back to earth, because the heat exchange happens right in the Philius Botsch Power Center on Earth. In the power engine, you have both the other end of the "hot" wormhole (with its other mouth on the surface of the sun, or as far away from a sun as you require, e.g. in orbit around a sun at a certain (perhaps low) distance) and the other end of the "cold" wormhole (with its other mouth on some cold gaseous planet, for example). Then you just run the engine off of the heat difference. No interplanetary power transportation required! Very sincerely yours, Philius Botsch 84.153.206.127 (talk) 18:40, 28 June 2010 (UTC)[reply]
Yes, I understood the idea and, if you can get the wormholes, it should work. There is no real need for the "cold" wormhole - the Earth is cold enough. --Tango (talk) 19:10, 28 June 2010 (UTC)[reply]
with that mentality, you should work for an oil company! The reason for the cold wormhole is to stave off at least some of the global warming associated with the scheme. Kindly, T. Philius Botsch 84.153.206.127 (talk) —Preceding undated comment added 19:15, 28 June 2010 (UTC).[reply]
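For what it's worth, the thermodynamic ceiling of the proposed two-wormhole engine can be sketched with the Carnot formula; the reservoir temperatures below are assumptions (the Sun's photosphere at roughly 5,778 K, Earth ambient near 290 K, and a hypothetical cold outer planet at 70 K):

```python
# Carnot limit for a heat engine running between two reservoirs.
# Temperatures are illustrative assumptions, not figures from the thread.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work: 1 - Tc/Th."""
    return 1.0 - t_cold_k / t_hot_k

eta_earth = carnot_efficiency(5778, 290)  # Sun -> Earth-ambient sink
eta_cold  = carnot_efficiency(5778, 70)   # Sun -> cold-planet sink
print(f"Sun -> Earth ambient : {eta_earth:.1%}")  # ~95.0%
print(f"Sun -> cold planet   : {eta_cold:.1%}")   # ~98.8%
```

Under these assumed temperatures the extra "cold" wormhole buys only a few percentage points of efficiency, which supports Tango's remark that the Earth itself is cold enough.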
You can use it for time travel. If you have one wormhole mouth sitting there, and another in orbit, time dilation will cause them to slowly get out of sync. After a while, you can bring them close together and put an optic fiber through one. If you send a signal through the fiber to the younger end, it will come out of the older end a tiny fraction of a second earlier. If you did it right, it will then go back to the other wormhole in a tinier fraction of a second, sending the signal slightly back in time. If you stick an amplifier along the optic fiber, you can send the signal arbitrarily far back. If you want to use it for energy, put one wormhole above the other, and drop something through it. — DanielLC 04:38, 30 June 2010 (UTC)[reply]
As a fan of the Command & Conquer series, I seriously wonder if the chronosphere (what, no article?) makes use of wormholes? 67.170.215.166 (talk) 08:20, 30 June 2010 (UTC)[reply]

asteroid mining


If, hypothetically, there were an asteroid made of solid platinum, 99.9% pure, what is the largest ΔV at which sending the platinum back would still break even, using current technology?

For simplification, we can assume that we don't need to crash chunks of metal into earth, we have a nice space station in geosynchronous orbit. Also, despite bringing tons of platinum back, the price is not going to crash, and the mining equipment is already on site. Googlemeister (talk) 18:38, 28 June 2010 (UTC)[reply]

Assuming you throw out economics ("the price is not going to crash"), you can spend whatever you want to get that much platinum on hand and you'll still be able to buy half the planet. Say you get this asteroid at 1km across -- there are 1-2 million asteroids that size or larger in the main belt. That's 4 quadrillion cubic centimeters of platinum. 90 trillion kg of platinum. At $1500/oz, that's $4.5 quintillion USD of market value. By way of contrast, world GDP is $60 trillion USD. So spend whatever you like. No meaningful answer is possible. — Lomn 19:01, 28 June 2010 (UTC)[reply]
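For anyone who wants to redo Lomn's arithmetic, here is a quick back-of-envelope sketch. The density and troy-ounce conversion are standard figures, the $1500/oz price is the one quoted above, and the numbers only reproduce Lomn's if the "1 km" is read as the asteroid's radius rather than its diameter:

```python
# Back-of-envelope check of the platinum-asteroid figures quoted above.
# Assumption: "1 km across" is treated as a 1 km *radius*, which is what
# makes the quoted volume/mass/value come out consistent.
import math

RADIUS_CM = 1e5            # 1 km radius, in centimetres
PT_DENSITY = 21.45         # g/cm^3, platinum
PRICE_PER_TOZ = 1500.0     # USD per troy ounce (price quoted in the thread)
GRAMS_PER_TOZ = 31.1035    # grams per troy ounce

volume_cm3 = (4.0 / 3.0) * math.pi * RADIUS_CM ** 3   # ~4.2e15 cm^3 ("4 quadrillion")
mass_g = volume_cm3 * PT_DENSITY                      # ~9e16 g = ~90 trillion kg
value_usd = (mass_g / GRAMS_PER_TOZ) * PRICE_PER_TOZ  # ~$4.3e18 ("$4.5 quintillion")

print(f"volume: {volume_cm3:.2e} cm^3")
print(f"mass:   {mass_g / 1000:.2e} kg")
print(f"value:  ${value_usd:.2e}")
```

Against a world GDP of roughly $6e13, the ~$4e18 result makes Lomn's point: the number is about five orders of magnitude beyond the entire world economy.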
If you brought that amount back, the price of platinum would crash to that of for example water. 92.24.188.76 (talk) 19:27, 28 June 2010 (UTC)[reply]
I disagree that no meaningful answer is possible. For example, lifting things into low earth orbit costs somewhere around $10,000/lbm. That weight of platinum (at $2,000/toz; in reality it is more like $1,600/toz) is worth around $29,000, so obviously LEO is feasible, but the ΔV to get to Mars would mean you cannot put something on Mars for $10,000/lb, it will cost more; so at what distance does that price equal that of platinum? Googlemeister (talk) 19:35, 28 June 2010 (UTC)[reply]
That seems to be a completely different question. The platinum asteroid is already in orbit; it needs negligible delta V to reach Earth. I don't see where a launch from Earth enters into the picture. — Lomn 20:37, 28 June 2010 (UTC)[reply]
(Edit Conflicts) No, similar question and not negligible, because as a first-order approximation*, if it takes X delta V to get something from the Earth's surface to LEO (or GEO, or Mars, or wherever), it will take the same delta V to get it from LEO (or wherever) to the surface: delta V is the difference in velocity between two different orbits (mathematically, everything is in an orbit) and therefore equals the change needed to get from one to the other, and is a scalar quantity independent of which direction the change is, so it will take the same amount of energy (hence, roughly, cost) to move a tonne of platinum from the Asteroid Belt to Earth as it would to move a tonne (of anything) from Earth to the Asteroid Belt. The article Delta-v budget may be helpful.
Since you've stipulated a return only to a GEO station (which presumably has some pressing need for vast quantities of platinum) you'll save the considerable delta V between GEO and Earth's surface (remember, there is a good deal of delta V even between LEO and GEO, and as Robert Heinlein famously said, LEO is "halfway to anywhere"), and need only consider that between the asteroid and GEO: this will depend heavily on the asteroid's initial orbit, and can be minimised by a carefully timed Hohmann transfer orbit plus additional necessary manoeuvres to allow for whatever degree of non-coplanarity is involved.
(*I say "first order approximation" because this ignores the cost of moving the fuel needed for subsequent manoeuvres (see Tsiolkovsky rocket equation), the possible use of aerobraking, etc., but for this initial analysis it's close enough.) 87.81.230.195 (talk) 21:46, 28 June 2010 (UTC)[reply]
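The Tsiolkovsky rocket equation mentioned above can be sketched numerically. It shows why delta V, not distance, is the cost driver: the propellant fraction grows exponentially with delta V. The specific-impulse figure below is my assumption (typical of a hydrogen/oxygen stage), not something from the thread:

```python
# Minimal Tsiolkovsky rocket-equation sketch: what fraction of a vehicle's
# initial mass must be propellant to achieve a given delta-v.
import math

G0 = 9.80665  # m/s^2, standard gravity

def propellant_fraction(delta_v_m_s: float, isp_s: float) -> float:
    """Fraction of initial mass that must be propellant, from dv = v_e * ln(m0/mf)."""
    v_e = isp_s * G0                          # effective exhaust velocity
    mass_ratio = math.exp(delta_v_m_s / v_e)  # m0 / m_final
    return 1.0 - 1.0 / mass_ratio

# Roughly the Earth-surface-to-LEO delta-v budget (~9.4 km/s) at an
# assumed Isp of 450 s: close to 90% of the vehicle must be propellant.
frac = propellant_fraction(9400.0, 450.0)
print(f"propellant fraction: {frac:.2f}")
```

The same function applied to a smaller interplanetary delta V (a few km/s for a well-timed Hohmann transfer) gives a much smaller propellant fraction, which is the nub of the first-order argument above.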
It takes a lot of thrust for a sustained time to get an object from the Earth's surface into a low Earth orbit. It clearly does not take a comparable sustained thrust to get it out of low Earth orbit. If there is an atmosphere at the destination, spacecraft do not "back down on the rocket" like in old space opera sci-fi films of the 1930s. (Granted, they did that on the Moon where aerobraking was impossible). All that is required is a brief and relatively weak retro-rocket burn for something in low Earth orbit to decrease the velocity by 1% or so. The object (space capsule or shuttle) then drops enough closer to the surface that air resistance slows it, while the ablative heat shield (on older Apollo capsules and such) or the insulating ceramic tiles (on the shuttle) heat up. A cargo mined from an asteroid would need to be sent on a carefully chosen approach to Earth for aerobraking to work. If the angle of approach is too direct, it will mostly burn up like a meteor. If it is too shallow, it will skip off on another trip through the solar system. If the angle is just right, and it has an ablative heat shield large enough relative to its mass, it will fall in a descending path partway around the Earth and land relatively unscathed in the desert where it can be recovered. The Apollo missions returning from the Moon got to the ground this way. For a cargo of metal from an asteroid, a hard landing which does not hit someone or their property and does not bury it too deep in the ground would be a windfall. A parachute might not be necessary (at least not a huge one required for a gentle landing) if the payload is a hunk of metal. We have a very long history of using flybys of various planets to aim a spacecraft toward some other planet, and of achieving precise approach paths. No space pilot on board would be necessary or desirable. 
Since the cargo is starting a long way off, relatively small thrusters could make the midcourse corrections needed for a precise reentry trajectory in relatively short bursts. Edison (talk) 18:02, 29 June 2010 (UTC)[reply]
Which is why I specified a first order approximation that omitted considerations like aerobraking (by implication, between LEO and Earth, which, as the OP had already specified a return only to LEO, was not directly relevant to his question). The OP seemed to think there would be negligible delta V between an asteroidal orbit and LEO, which I suggest is incorrect and which I was trying to refute. 87.81.230.195 (talk) 21:13, 29 June 2010 (UTC)[reply]
Where are you getting that the asteroid is in orbit around Earth? Obviously if that were the case, then the question would not make sense. If the asteroid is in orbit around the sun at the same distance as Mars, though, the ΔV is going to be higher. At what ΔV does this become uneconomical? Googlemeister (talk) 20:55, 28 June 2010 (UTC)[reply]
I should have been more clear: the asteroid is in solar orbit. Orbital mechanics allow for absurdly efficient transfer orbits. Given the vast amounts of handwaving already present in your assumptions, you cannot find a breakeven point. On the other hand, given reasonable assumptions, you also can't find a breakeven point. It's nicely symmetric that way. — Lomn 21:10, 28 June 2010 (UTC)[reply]
You know, if you are going to be obtuse, I would rather you not answer. Googlemeister (talk) 21:20, 28 June 2010 (UTC)[reply]
The assumption of fixed price is seriously flawed. There is no need for 90 trillion kilograms of platinum. What would we do with it? Who would pay for it? How could you expect to sell all of it at current prices? That would be 10,000 kilograms of platinum for every member of the planet, with 10 trillion kilograms left over. How could you expect a reasonable market to exist for this quantity of metal? If you want to justify space travel economically, you can't throw economics out the window. Nimur (talk) 21:02, 28 June 2010 (UTC)[reply]
Obviously a fixed price is not accurate, but then the question would not be answerable, because you could not tell me how much the price would be impacted by adding 50 tons, or 100 tons, or 500 tons of platinum to the market, could you? And I never said the asteroid was 9e16 kg. Googlemeister (talk) 21:05, 28 June 2010 (UTC)[reply]
You're right. Specific numbers are tangential, since this is all hypothetical anyway. But the point is, space travel is rarely justified in the same way as trade down here on earth. If bananas are cheaper on another continent, you can go buy them there and ship them here. But space travel involves going someplace where earth economics are meaningless - things only have "value" if they contribute to some objective. So, if your objective is "get lots of platinum", then you have to demonstrate that the best way to do that is space mining. It's pretty unlikely that any resource is easier to get in outer space than down here on earth - so mining for commodity metal is not a very good objective. Most of human space exploration has been justified in terms of expanding our understanding of the universe, not in terms of economic benefit (though some politicians justify space exploration because of the peripheral technological and economic development it does create here on earth). Nimur (talk) 21:28, 28 June 2010 (UTC)[reply]
Find a large chunk of the platinum, then build a very simple catapult on it and throw 1-metre balls onto a trajectory that will collide with the Earth. Some 50% will evaporate, but that is not a big problem. Cheap and easy. Figure 10, more likely 50, Ariane 5 launches at 250 million euros each, and another 10,000 million euros for the program to bring people there and back (mining is not a job for robots). That makes a lot of platinum you have to get back.--Stone (talk) 21:27, 28 June 2010 (UTC)[reply]
The question isn't "How much should I spend to bring back an entire platinum asteroid?" It's "If I'm harvesting some small portion of the asteroid, how much fuel can I afford per ounce of platinum? And is that enough to bring that ounce of platinum home?" APL (talk) 02:00, 29 June 2010 (UTC)[reply]
What complicates this stuff is that some materials which are essentially worthless on earth (air, water, dirt) are phenomenally valuable when you put them someplace else. A ton of air can be had for free down here on earth - but just 200 miles away in low-earth-orbit, at $10k per pound of launch costs, that same ton of air is worth $20 million. Just taking an icy asteroid and smacking it into the equator of the moon or Mars - or getting it into any kind of stable earth orbit - would produce a resource that would make a spectacular difference to the future of humans in space. In that sense, the mere fact of where you place this huge pile of metal is more important than what it's made of. Platinum is a bit of a pain to work with, and it's kinda heavy. Aluminium, water, oxygen or even reasonably fertile dirt would probably be worth more in those kinds of quantities because while platinum's value is due to its rarity (which you're about to destroy), the value of other materials is due to the amount of fancy technology and rocket fuel you're saving by not shipping it up from Earth - and that's something whose value would be much harder to erode. SteveBaker (talk) 23:22, 28 June 2010 (UTC)[reply]
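SteveBaker's $20 million figure is just launch cost times mass; a one-line sanity check using the thread's assumed $10,000 per pound:

```python
# "Value in orbit" of a ton of anything, at the launch cost assumed above.
# Assumption: "ton" here is a short ton (2,000 lb).
launch_cost_per_lb = 10_000.0   # USD/lb, figure quoted in the thread
pounds_per_ton = 2_000.0        # short ton

in_orbit_value = launch_cost_per_lb * pounds_per_ton
print(f"embodied launch cost: ${in_orbit_value:,.0f}")  # $20,000,000
```

This is why the placement argument works: any mass already in orbit carries its launch cost as a floor on its value there, independent of what the material is worth on the ground.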

The article Asteroid mining might help. Cuddlyable3 (talk) 11:56, 29 June 2010 (UTC)[reply]

Getting rid of freezer odour


A couple of weeks ago I had the unfortunate experience of finding out my freezer had accidentally become unplugged (thanks to the stupidly short mains lead) and the contents had been in there rotting for at least a week. I just about managed to bag and bin the contents without being physically sick (the smell was so bad it would have offended the devil himself) and have cleaned the thing - thoroughly - several times, with bleach, disinfectant and various other such products but to no avail. The smell was on my hands for days and even white spirit (which is usually pungent enough to mask any other smells) only brought a temporary respite.

I then put the freezer outside with the door off and left it there for 2 weeks (no rain luckily) and there is still a conspicuously unpleasant odour, although a shadow of its former self. Is there any way I can exorcise this satanic stench permanently or is it best to get rid of the freezer? —Preceding unsigned comment added by 94.197.153.113 (talk) 19:00, 28 June 2010 (UTC)[reply]

You're probably going to have to partially take apart the freezer. There might be some rot hidden behind some of the plastic panels. Once you remove all the bulk rot material you'll want to force the bacteria to finish the job - i.e. get the rot going at high speed: make the freezer warm and humid, but aerobic (i.e. let oxygen in). Also, once you turn it on, the cold temperature will probably stop the smell. And next time (hopefully not :) buy a gas mask in a hardware store. Ariel. (talk) 19:11, 28 June 2010 (UTC)[reply]
(ec)My freezer also failed while it was loaded with fish. I filled a window cleaner spray with undiluted chlorine bleach, sprayed every internal surface including the seals and latch, then shut the door for a day. After what had gone before, the lingering slight chlorine smell was a blessed relief. Cuddlyable3 (talk) 19:13, 28 June 2010 (UTC)[reply]
I agree that you need to let the bacteria finish the job they started. They are breaking the remaining food residue down and making the stink in the process - but eventually, they'll run out of stuff to decompose - and then you're done. The more stuff you can remove yourself, the faster that'll happen. Pay particular attention to small crevices, screw heads and places like that. The large, flat, smooth surfaces are easy to clean. When something similar happened to me, I found that leaving it outside (opened) and in sunlight helped.
Oh, and at the risk of sounding obvious, may I suggest getting an extension cord so that the same problem doesn't occur in the future? After all, you said that the freezer got unplugged because of a "stupidly short mains lead" -- sounds to me like an extension cord would fix that for good. 67.170.215.166 (talk) 00:59, 29 June 2010 (UTC)[reply]

On swallowing food items larger than one's own head...


Just saw this on YouTube. How long do you reckon it takes to digest that? How is there even room in the digestive system to hold and move that through? --Kurt Shaped Box (talk) 19:38, 28 June 2010 (UTC)[reply]

"this", btw, is a kookaburra eating a rat. --Tagishsimon (talk) 21:43, 28 June 2010 (UTC)[reply]
Don't know much about kookaburras, but I know a different species which can do that, too. --Ouro (blah blah) 05:33, 29 June 2010 (UTC)[reply]

NFPA 704


Why does sodium bicarbonate have a 1 in toxicity? People eat it all the time with no ill effects. --75.25.103.109 (talk) 19:59, 28 June 2010 (UTC)[reply]

I think that the solid can cause ill effects by reaction with gastric acid. --Chemicalinterest (talk) 20:03, 28 June 2010 (UTC)[reply]
Looking at the International Chemical Safety Card here, it could be because the powder is irritating to the eyes. Physchim62 (talk) 20:20, 28 June 2010 (UTC)[reply]
I don't know if this is related, and it's been a while and I can't find any sources, but I believe there are (or were) some regulations for labs in NZ which, oddly enough, also cover sodium chloride, imposing some requirements or restrictions if you store it in large quantities. These relate to the fact that, as with many things, it is obviously toxic if you eat too much; and while the toxic quantity is fairly high (I think I've heard 500 g), it's not considered high enough that you don't have to worry about it. This has unsurprisingly sometimes been the subject of ridicule, since someone in a fish and chips shop has no such requirements surrounding how they store their sodium chloride. Also, from NFPA 704, "Exposure would cause irritation with only minor residual injury (e.g., acetone)", it doesn't sound like eating is the only consideration here; as Physchim mentioned, you don't generally want it in your eyes, or for that matter in open wounds or whatever. Nil Einne (talk) 11:40, 29 June 2010 (UTC)[reply]

evolutionary history of apoptosis and vertebrate immune systems


What was the precursor to the vertebrate immune system and its diversity? What did the very first white blood cells look like? (They seem a bit renegade...did they evolve from cells that had a little of a rebellious streak?) Also what were the predecessors to apoptosis? John Riemann Soong (talk) 21:38, 28 June 2010 (UTC)[reply]

Lot of questions - I'll try my best! Firstly, have you read Immune_system#Other_mechanisms? I'm not sure about the evolution of white blood cells, this might help though or other papers here. This discusses the evolution of apoptosis. Interestingly there are some analogies between plant and animal immune responses - systemin activates MAPKs in turn releasing jasmonic acid from the membrane, a similar system causes the production of prostaglandins in the mammalian inflammatory response (see this). Although systemin is only found in the Solanaceae, similar peptides have been found in Arabidopsis. 86.7.19.159 (talk) 23:37, 28 June 2010 (UTC)[reply]

anywhere I can buy movement protein?


I'm also using more professional sources, but I thought I'd try for a quick answer here...does anyone know if tobacco mosaic virus movement protein will work on garlic or onion cells (or similar edible rootlike tissues) and where I can buy it? John Riemann Soong (talk) 21:41, 28 June 2010 (UTC)[reply]

This paper discusses alfalfa mosaic virus movement proteins moving through the plasmodesmata of onion epidermis cells. I take it that this collection in Wageningen would have it; it would cost $75, or more likely be free. 86.7.19.159 (talk) 22:46, 28 June 2010 (UTC)[reply]
Onion cells ... exactly what we are using. (It's why I brought my potato peeler and various random vegetables to work today... I love this project.) THANKS. John Riemann Soong (talk) 22:49, 28 June 2010 (UTC)[reply]
No worries, this might be a better option, now that I've noticed you're in the US. 86.7.19.159 (talk) 23:41, 28 June 2010 (UTC)[reply]
Uhh can't seem to find a specific entry or price. My supervisor recommended me some large protein company, but I can't seem to remember the name. John Riemann Soong (talk) 16:17, 29 June 2010 (UTC)[reply]

Ancient lead


This story says that nuclear physicists are stoked that a cargo of lead ingots was found on the floor of the Mediterranean after having lain there for 2000 years, because almost all of the lead-210 has decayed by now, so they're going to use it to shield the "CUORE" neutrino experiment (our redlink article from the "Cuore" disambig page is Cryogenic Underground Observatory for Rare Events) and are happy to have some lead for this purpose that isn't going to emit any radioactivity. Why is this lead particularly different from any random amount of lead ore that was mined yesterday? Is the problem that metallurgists are unable to get "pure" lead of some stable isotope, and some heavier elements are always in the lead and eventually decay into lead-210? Comet Tuttle (talk) 23:16, 28 June 2010 (UTC)[reply]

Yes, that is the problem. It is very difficult to separate different isotopes of the same element because you have to use physics rather than chemistry. People 2000 years ago were able to purify their ore into nearly pure lead and all the radioactive isotopes eventually decayed away. We can also get pure lead, but it will contain lots of isotopes, and we can't easily remove the radioactive ones (which wouldn't have been lead 2000 years ago, which is why people then could remove it). --Tango (talk) 23:39, 28 June 2010 (UTC)[reply]
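The "decayed away" point is easy to quantify. A minimal sketch, using lead-210's half-life of roughly 22.3 years (a standard figure, not from the thread):

```python
# Fraction of the original Pb-210 remaining after ~2000 years on the seabed.
half_life_years = 22.3   # Pb-210 half-life, approximately
elapsed_years = 2000.0   # age of the sunken Roman ingots

remaining_fraction = 0.5 ** (elapsed_years / half_life_years)
print(f"Pb-210 remaining after {elapsed_years:.0f} y: {remaining_fraction:.1e}")
```

Roughly 90 half-lives have passed, so the surviving fraction is on the order of 1e-27 - effectively zero, which is exactly why the experimenters are so keen on this particular cargo.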
From this fact sheet from the Argonne National Laboratory, the culprit appears to be radon-222, which is a decay product of uranium-238. Rn-222 has a long enough half-life to spread small amounts of lead-210 (and hence polonium-210) quite widely in the environment, so I would imagine there would be a particular seeding of deposits of lead ore (often associated with uranium-containing minerals). Physchim62 (talk) 23:44, 28 June 2010 (UTC)[reply]
I've just noticed that the reader comments to the Physics World article give the same suggested answer (radon-222). Physchim62 (talk) 23:52, 28 June 2010 (UTC)[reply]
Thank you! Comet Tuttle (talk) 05:27, 29 June 2010 (UTC)[reply]
Ref Desk in the past has discussed how scientists also like to use steel from pre-1945 battleships as shielding, since after the first atomic bomb detonations, refined iron picked up traces of radioactive contamination and works less well as shielding. It is surprising that in an era before modern chemistry or metallurgy they were able to refine "pure" lead, when all they had was empirical rules of thumb and superstition as guides, with no real way to chemically analyze the result. Edison (talk) 17:35, 29 June 2010 (UTC)[reply]
I have the same idea about how savages, such as chefs at four-star restaurants, can cook at all, let alone well, given that most of them don't have even a grade-school, or nineteenth-century understanding of the chemistry involved. 92.230.66.154 (talk) 20:54, 29 June 2010 (UTC)[reply]
Irrelevant. Do the cooks there do a lot of lead refining? Remind me to stay out of that restaurant. Edison (talk) 19:48, 30 June 2010 (UTC)[reply]
Galena is relatively easy to smelt compared with most metal ores. Unlike most sulfide ores, you can get the sulfide itself to act as the reducing agent under the right conditions. As for purity, it would be interesting to see what sort of purity they managed; but they would have been aiming for particular metallurgical properties, not chemical purity as we understand it today. Physchim62 (talk) 22:36, 29 June 2010 (UTC)[reply]