Wikipedia:Reference desk/Archives/Science/2015 June 3

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 3


equivalent resistance


What is the equivalent resistance of the circuit, neglecting the internal resistance of the cell? All resistances are 10 ohms. How do I calculate it? I have tried to check the Wheatstone bridge condition, but since the ratio of resistances on either side is not equal, it is not applicable. So how do I calculate it? AmRit GhiMire "Ranjit" 16:21, 3 June 2015 (UTC)[reply]

I would call this circuit "just barely" above trivial: I can't solve it in my head by inspection. So, the next step in my toolbox is to perform nodal analysis and solve some simultaneous equations to determine the equivalent resistance (as measured at what I assume you intend to be the input ports, between the voltage source's positive terminal and ground). Perhaps a different volunteer editor will spot some "trick" to simplify the circuit that I missed. In any case, working out these kinds of problems is important, because even though we can do it for you (or make a computer do it), you need to know how it works. Nimur (talk) 16:32, 3 June 2015 (UTC)[reply]
I used nodal analysis with Kirchhoff's laws, but what I obtained is just the ratios between the currents flowing through the resistances. I have completed my schooling, so this is not a case of WP:DYOH. AmRit GhiMire "Ranjit" 16:41, 3 June 2015 (UTC)[reply]
It still might help if you show us some of your work; that way someone might be able to spot where you went wrong. SemanticMantis (talk) 16:58, 3 June 2015 (UTC)[reply]

Alright, I've never done one of these, so anyone using my answer for homework is at Significant Risk. But try laying this out as:

1 X 3
  5
2 Y 4

with R = 10 ohm

I (total) = I1 + I2 = I3 + I4. Defining I5 to keep it positive, we expect I1 = I3 + I5 and I2 + I5 = I4. By symmetry we expect Y = 1 - X, I3 = I2, I4 = I1. So I1 = I2 + I5; I5 = I1 - I2. Then I + I5 = 2I1.

Now R * I1 = 1 - X; 2R * I2 = X; R * I5 = 2X - 1.

But R * I5 = R * (I1 - I2) = (1 - X) - (X/2) = 1 - 3 X/2. So 2X - 1 = 1 - 3 X/2. So 7 X / 2 = 2; X = 4/7 volts. Y then is 3/7 volts. The current I5 then is 1/7 volt / 10 ohm = 1/70 ampere. The current I2 is 4/7 volt / 20 ohm = 2/70 ampere. The current I1 is 3/7 volt / 10 ohm = 3/70 ampere. Total current I is 5/70 ampere. So the total resistance is 1 volt / (5/70 ampere) = 14 ohm. Wnt (talk) 18:07, 3 June 2015 (UTC)[reply]
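For anyone wanting to check this numerically, here is a minimal nodal-analysis sketch in Python (the resistor placement — 10 Ω source→X, 20 Ω source→Y, 20 Ω X→ground, 10 Ω Y→ground, and the 10 Ω bridge X–Y — is my reading of the layout above, not something stated explicitly in the original figure):

    # Nodal analysis (KCL) at nodes X and Y, with the source at 1 V and ground at 0 V.
    import numpy as np

    R1, R2, R3, R4, R5 = 10.0, 20.0, 20.0, 10.0, 10.0  # S-X, S-Y, X-G, Y-G, X-Y (ohms)

    # KCL at X: (1 - X)/R1 = X/R3 + (X - Y)/R5
    # KCL at Y: (1 - Y)/R2 + (X - Y)/R5 = Y/R4
    A = np.array([[1/R1 + 1/R3 + 1/R5, -1/R5],
                  [-1/R5, 1/R2 + 1/R4 + 1/R5]])
    b = np.array([1/R1, 1/R2])
    X, Y = np.linalg.solve(A, b)

    I_total = (1 - X)/R1 + (1 - Y)/R2  # total current leaving the 1 V source
    print(X, Y)           # 0.5714... (= 4/7) and 0.4285... (= 3/7) volts
    print(1.0 / I_total)  # equivalent resistance: 14.0 ohms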

Try a star-delta transform --catslash (talk) 18:53, 3 June 2015 (UTC)[reply]
This is definitely an easier calculation to make, though I really ought to have drawn a picture: choosing the right side arbitrarily (the left would be the same), the right loop can be replaced with 10*10/40 = 2.5 and 10*20/40 = 5 ohm resistors. The 2.5 ohm resistor is placed opposite the original 20 ohm resistor, on the lower path; a 5 ohm resistor is placed on the upper path; and a 5 ohm resistor is placed at the end of the circuit before ground. Note that the same substitution cannot then be made at the left side because the bridge resistor has already been eliminated; you're now finding the parallel resistance of 10 + 5 = 15 ohm and 20 + 2.5 = 22.5 ohm, which works out to 9 ohms, added to the 5 ohms next to ground for a total of 14 ohm. Knowing the voltage is 1 V, this gives the 5/70 ampere total current as above. Figuring out the individual currents after you've rewritten the circuit isn't that simple, but the left end of the circuit is unchanged: you know that the upper and lower paths there have a 3:2 ratio of resistance, so you can assign the currents to them, and once you know the currents you know the voltages of the actual nodes. Wnt (talk) 11:53, 4 June 2015 (UTC)[reply]
I had in mind replacing the 10Ω, 10Ω, 20Ω star centred on the top middle node with a 50Ω, 50Ω, 25Ω delta, and then calculating (((10+10) || 25) + (10 || 50)) || 50 = 14 --catslash (talk) 13:09, 4 June 2015 (UTC)[reply]
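A quick numeric check of that star-delta arithmetic (a sketch; par() is just a helper for two resistors in parallel):

    # Star-to-delta: each delta resistor is the sum of pairwise products of the
    # star resistors, divided by the star resistor opposite it.
    def par(a, b):
        return a * b / (a + b)

    s = 10*10 + 10*20 + 20*10                        # sum of products = 500
    print(s/20, s/10, s/10)                          # delta: 25.0, 50.0, 50.0 ohms
    print(par(par(10 + 10, 25) + par(10, 50), 50))   # 14.0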
In real life (as opposed to for homework), it's advisable to check whether your answer is plausible. In this case one check is to consider alternately removing and shorting out the middle resistor. Removing a current path can never decrease the overall resistance, and adding a new current path can never increase it - so immediately you know 15 Ω > R > 13.333...Ω, which is consistent with R = 14Ω (which is correct). --catslash (talk) 19:33, 3 June 2015 (UTC)[reply]
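Both bounds are quick series-parallel calculations (a sketch, using the same assumed resistor placement as above):

    def par(a, b):
        return a * b / (a + b)

    print(par(10 + 20, 20 + 10))      # bridge removed: 15.0 ohms (upper bound)
    print(par(10, 20) + par(20, 10))  # bridge shorted: 13.33... ohms (lower bound)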
Re networks of equal resistors, a classic puzzle that might interest you is the cube network. The cube has a circuit node at each of its 8 vertices and an equal resistor (R say) along each of its 12 edges; what is the resistance between any two opposite corners (i.e. along a '3D diagonal' of the cube)? There's a trick that allows you to answer using only easy mental arithmetic. --catslash (talk) 13:32, 4 June 2015 (UTC)[reply]
This isn't a place for spoiler alerts, so I'll just say I assume the trick is to short equivalent nodes, thereby reducing it to parallel resistors and a total value of 5/6 R, I think. Wnt (talk) 18:40, 4 June 2015 (UTC)[reply]
Exactly so. --catslash (talk) 11:45, 5 June 2015 (UTC)[reply]
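In numbers, the trick works out like this (a sketch: the three vertices adjacent to each end of the diagonal sit at equal potentials by symmetry, so shorting each set leaves three groups of parallel resistors in series):

    R = 1.0
    # 3 parallel edges from the input corner, 6 parallel middle edges,
    # 3 parallel edges into the output corner
    print(R/3 + R/6 + R/3)   # 0.8333... = 5R/6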

Electron's lifetime


From Electron#Fundamental_properties: If "the electron is thought to be stable on theoretical grounds: [it] is the least massive particle with non-zero electric charge, so its decay would violate charge conservation."

So, how can it be that the electron's "mean lifetime is 4.6×10^26 years"? Why aren't electrons immortal? What happens after these 4.6×10^26 years? --Llaanngg (talk) 21:23, 3 June 2015 (UTC)[reply]

The article says "The experimental lower bound for the electron's mean lifetime is 4.6×10^26 years [...]" (emphasis added). In other words, the experiments suggest that the lifetime is at least 4.6×10^26 years, which is consistent with the theoretical prediction that it's ∞. -- BenRG (talk) 21:50, 3 June 2015 (UTC)[reply]
(edit conflict) It says "The experimental lower bound for the electron's mean lifetime is 4.6×10^26 years, at a 90% confidence level". Here "lower bound" means that if electrons decay then it appears it must take more than 4.6×10^26 years on average, because if it took less then some decays would probably have been detected already in the huge number of examined electrons. No decay has actually been detected and it's possible they never occur. 4.6×10^26 years is merely the smallest value some physicists currently think could have gone undetected. More experiments may raise that value. If electrons do in fact decay then I don't think there is currently any particular reason to think the mean lifetime would actually be near 4.6×10^26 years. It could be any number above that. PrimeHunter (talk) 22:00, 3 June 2015 (UTC)[reply]
You have to be careful here. That particular lifetime is for a decay into a photon and neutrino (if that's how it decays, it would take at least that long). But if it actually decayed in a different way (3 neutrinos) then the data is not as good, and we only know it would take more than 4.6×10^24 years. And if it decayed in some way we have not imagined at all then the age is different yet again (and depends on whether it just vanishes, or leaves some sort of mini-electron behind). So you have to be careful when just saying in a blanket way "the electron lifetime is more than x". Ariel. (talk) 23:20, 3 June 2015 (UTC)[reply]
(edit conflict) I hate to muddy the waters, but 90% confidence is pretty uncertain by the standards of classical physics, and even by the standards of particle physics. As the IOP explains on their informational page, What does the 5 sigma mean?, particle physicists expect 5 sigma (99.9999% confidence) before they claim a result is "significant."
It would be interesting to learn what parameter for the half-life of the electron would instill the 5-sigma confidence level that particle physicists typically expect for publication. The reference for the "4.6×10^26 year" value, the Electron Data Book from Lawrence Berkeley National Laboratory (last updated in 2012 in IOP's peer-reviewed PRD journal) cites several older studies, and explains why they use mean life (instead of half life), and why confidence is so difficult to establish. The primary experimental data that informs these parameters is the lack of an observation to contradict it.
Here is some further reading: Tests of electric charge conservation and the Pauli principle (1989) (available at no cost); and Review of Particle Properties (PRD, 1992), available by subscription only, with a complete note on confidence intervals. Nimur (talk) 23:31, 3 June 2015 (UTC)[reply]
The reason such a number exists is that, based on the current understanding, if it were less than that number, we'd have seen some experimental proof of that lower number. Notably, the universe itself is only 1.4 x 10^10 years old, which means the universe would have to be over 100 trillion times as old just to check to see if the prediction of the lower bound is correct. Or, as non-physicists would say "no friggin way". Physicists don't speak in absolute terms, but those of us who aren't physicists shouldn't feel so wishy-washy. Near as we can tell, electrons do not decay. That's part of the reason why we call them Elementary particles. --Jayron32 02:03, 4 June 2015 (UTC)[reply]
Being elementary is unrelated to being stable (= not decaying = having an infinite half life). The lightest electrically charged particle happens to be elementary, but it would be stable by the same argument if it were composite. Most of the elementary particles of the Standard Model (that can exist as free particles) are unstable. In general, particles are unstable unless they're protected by some conservation law (like charge conservation), and there aren't many of those to go around. -- BenRG (talk) 03:17, 4 June 2015 (UTC)[reply]
Fair enough. Please also correct my notion that electrons are essentially stable, except for pedants for whom "at least 10^26 years" is not enough of a synonym for "forever". --Jayron32 03:59, 4 June 2015 (UTC)[reply]
"...which means the universe would have to be over 100 trillion times as old just to check to see if the prediction of the lower bound is correct" Not really. For example, if the mean lifetime were 4.6×10^26 years and we had the required experimental capability, we would detect a decay in a kilo-mole of hydrogen in less than a year. The catch of course is the "required experimental capability" to detect a single electron decay, but in principle the lower bound can be refuted or bettered within minutes or even seconds (assuming the decay is a Poisson process). Abecedare (talk) 04:25, 4 June 2015 (UTC)[reply]
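The arithmetic behind that, as a sketch (assuming exponential decay, so the expected number of decays among N electrons in a time T << τ is about NT/τ):

    N_A = 6.022e23        # Avogadro's number
    N = 1000 * N_A        # electrons in a kilo-mole of hydrogen (one per atom)
    tau = 4.6e26          # hypothetical mean lifetime, in years
    print(N / tau)        # ~1.3 expected decays per year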
Waiting a kilomole of years for any of them to decay is not the same as expecting one out of a kilomole of particles to decay each year. --Jayron32 04:31, 4 June 2015 (UTC)[reply]
It is, if we are talking about exponential decay. That is how half-lives of radioactive isotopes with values of billions and trillions of years have been determined experimentally within roughly a century of discovery of radioactivity. It is not, if one is imagining (say) each electron living a fixed lifetime and then suddenly decaying. I haven't checked the literature for the hypothesized electron decay process, but am assuming it is closer to the former than the latter. Stand ready to be corrected. Abecedare (talk) 04:49, 4 June 2015 (UTC)[reply]
This discussion of the numbers perhaps detracts from the more fundamental issues at play. This forum thread discusses the issue of the 10^26 number and notes it isn't the result of experimental kinetics studies, but rather of computer modeling of proposed decay mechanisms, and thus represents a best-guess lower-bound of the models, not any sort of experimentally determined decay kinetic study. If we really want a good, easy to understand explanation (including such fundamental concepts as Noether's Theorem and charge conservation) this discussion explains why electron decay should be impossible under our current understanding. The basic principle in this case is that, for the electron to be able to decay, there would need to be a lighter decay product which could carry away the electron's charge. The sum of masses of decay products must be lighter than the mass of the initial particle, because of the second law of thermodynamics: a spontaneous process cannot produce particles of less kinetic energy than the starting particle; if it did so it would mean it would have had to have converted some of that kinetic energy to potential energy spontaneously, which is functionally equivalent to heat spontaneously flowing from cold to hot. No. That means that product particles in spontaneous decay processes must have a lower mass than the initial particle. What product particle, which has less mass than the electron, is available to carry away the electron's charge? If you can figure that out, the King of Sweden has a nice medal he'd like to hang around your neck... --Jayron32 05:31, 4 June 2015 (UTC)[reply]
Hmm. There are aspects of that answer that don't seem to be up to your usual standard on scientific matters, Jayron. The second law is a statistical phenomenon — it doesn't forbid anything from happening; just says that some things are unlikely (perhaps sometimes so unlikely that we wouldn't expect to see them even once within the observable universe during its useful lifetime, but still just unlikely, not forbidden). It doesn't really apply to one-off events like a posited electron decay, and if it did, the connection with kinetic and potential energy is still obscure. Besides which, kinetic energy can easily turn into potential energy — happens every time you throw a ball upwards.
The stricture about the masses of the decay products seems a lot simpler and not at all statistical — I think it's just billiard-ball physics, the need to preserve energy and momentum simultaneously.
You might have some unfortunate sociopolitical ideas, but you usually seem to know what you're talking about when it comes to science, so I await your clarification. --Trovatore (talk) 05:57, 4 June 2015 (UTC)[reply]
Balls flying upwards are not spontaneous processes. The second law only states that spontaneous processes cannot work that way, unless they somehow compensate by having another process make up the difference; throwing a ball upwards requires an expense of potential energy somewhere else, in the case of me throwing the ball, I would have to heat up the air with the heat of my muscles by burning some food energy, thus maintaining the basic tenets of the second law. The second law is not an approximation or a nice statistical average, it is a rigorous law which has never been shown to be violated, except by people who fail to fully analyze the situation. In the case of particle decay, there is a difference between spontaneous particle decay (that is, leave it alone and it just happens) and induced particle decay (that is, smash the particle with another particle, introducing a whole lot of kinetic energy in the process). You can get all sorts of interesting stuff by smashing particles with other particles. But by definition, that process is not spontaneous. The second law always wins. --Jayron32 06:17, 4 June 2015 (UTC)[reply]
Just for the sake of rigor as well, Wikipedia has an article titled spontaneous process since you seem unfamiliar with the term. --Jayron32 06:27, 4 June 2015 (UTC)[reply]
Well, this is not my understanding of the second law, but then things get interpreted a lot of ways distinct from my understanding, so I'm going to give you a chance to explain further. But first I'll say a few words about how I do understand it, to perhaps focus the discussion.
As I understand it, thermodynamics in general is a statistical discipline. In fact I learned it from the point of view of quantum statistical mechanics. Almost by definition, thermodynamics is applicable only to large statistical ensembles. There is no such thing, for example, as the "temperature" of a single particle, because heat is specifically about random energy, and a single elementary particle really can't have that.
Moreover, the second law absolutely can be violated, albeit with probabilities you most likely consider "effectively" equal to zero (but they are still not really equal to zero). The cream can spontaneously (yes, really spontaneously) separate out of the coffee. At a quick guess, a lower bound for the probability that I'm pretty sure I can defend later is , which is certainly very close to zero, but which is absolutely positively not zero. In fact, given enough trials, it will almost definitely happen — see infinite monkey theorem.
But those are somewhat side issues. Maybe you really do have a way of looking at things whereby the second law is applicable to showing that a particle cannot decay into a more massive particle. As yet, however, you have not made clear what that way is, certainly not to readers who have learned about thermo in a similar way that I have, as explained above. Will you clarify? --Trovatore (talk) 00:09, 5 June 2015 (UTC)[reply]
Fine then, look at it this way. Potential energy is created whenever an object is moved against a force (in the opposing direction). Potential energy turns into kinetic energy when a particle moves with a force (in the same direction). By fundamental definition, a spontaneous process must be one that converts potential energy to kinetic energy, and thus the reverse process must be anti-spontaneous. The second law of thermodynamics is merely taking this fundamental definition, and applying it to large numbers of particles. It is still a basic principle that kinetic energy does not turn into potential energy spontaneously in an isolated system; if it does, the system isn't isolated and there must be a commensurate such conversion somewhere else so that the net entropy of the universe is still not decreasing. That means, in any spontaneous decay event, the product particles have to have more kinetic energy than the initial particle, because if they had less kinetic energy, the lost kinetic energy would have been turned into potential energy, which by definition, cannot be spontaneous, QED. --Jayron32 00:26, 5 June 2015 (UTC)[reply]
I get the feeling you may be using some of these words in a way with which I am not familiar. Let's take a baby example, and you can walk me through it and say how your analysis applies.
Let's suppose that the whole universe consisted of just the Sun and the Earth, more or less as they are now, except that we're going to ignore solar wind, insolation, etc, basically assume that they're rigid, unchanging bodies that interact only by gravity, and don't dissipate any heat as a result of tides. But we'll keep the fact that the orbit is elliptical — in fact, let's imagine that it's really elliptical, with perihelion at 0.1 AU and aphelion at 10 AU, just for fun.
OK, what happens in the course of a year? At aphelion the Earth is moving slowly, has low kinetic energy, but as it falls in towards the Sun, some of its potential energy turns into kinetic energy and it speeds up. Then, from perihelion to aphelion, the reverse is occurring — some of the kinetic energy becomes potential energy; the Earth climbs part-way out of the Sun's gravity well and slows down.
Now, what part of that is "spontaneous" or "anti-spontaneous"? It all looks spontaneous to me. Is there an applicable notion of entropy according to which entropy increases from aphelion to perihelion and then diminishes in the other half of the year, and if so, what is it? (Doesn't look like Shannon entropy or von Neumann entropy, for example, as far as I can tell.) --Trovatore (talk) 00:42, 5 June 2015 (UTC)[reply]
You're describing the earth's motion as a harmonic oscillator. It's fundamentally no different than a frictionless pendulum; once set in motion it will continue its oscillation essentially forever. Because as an isolated system, it is not losing or gaining energy. Still, if no outside force sets the pendulum in motion, it remains stubbornly hanging down, and will not spontaneously start swinging. But if we start observing the frictionless pendulum after it has already been set in motion, it will likewise continue swinging forever. Back to the earth-sun problem. Ignoring the rest of the universe for a minute, if no outside force acts on the earth-sun system, it will go on forever. Because the system is in a steady state, it isn't undergoing any "process". It's just existing in its steady state. --Jayron32 01:02, 5 June 2015 (UTC)[reply]
OK, I read that, and unless I made some silly mistake, it's all fine, with a possible quibble about what you mean by "process". So fine, I agree with what you've said about that system. Do you agree with what I've said about it, specifically about it trading potential for kinetic energy at some times, and kinetic for potential at others? If not, then in what way? If so, then how do you defend the claim that kinetic energy never spontaneously turns into potential energy? --Trovatore (talk) 02:07, 5 June 2015 (UTC)[reply]
As you and others have said (somewhere above), electron decay would mean either: (a) existence of a lighter negatively charged particle, or (b) violation of charge conservation, and either proposition is very unlikely and would almost guarantee a Nobel prize to anyone (or any group) who proves it. That is the whole reason physicists are interested in running experiments to detect electron decay. It is low-probability, high-risk, high-reward research conducted by completely legitimate groups of physicists (because it is infeasible for anyone else to do so, and cranks would rather violate conservation of energy or direction-of-time laws).
For example, see this paper that established the 4.6×10^26-year lower bound. The experiment looks for decay among N = 1.36×10^30 electrons over a period of T = 32.1 days. By upper-bounding the excess radiation they detected (i.e., after subtracting off the expected radiation from other known processes), they are able to upper-bound the number of electron decays (if any!), and thus lower-bound the mean lifetime. Pretty simple in principle. Abecedare (talk) 12:13, 4 June 2015 (UTC)[reply]
You linked to a Physics Forums thread where someone named Morgoth said that the 10^26-year bound comes from theoretical models, not experiment. The Wikipedia article says it's experimental and cites a paper describing the experiment, so you should be able to figure out that it's more likely to be right than Morgoth.
A free particle decaying into particles with a larger total mass is forbidden by energy conservation, which is the first law of thermodynamics, not the second.
Particles with a finite half life are equally likely to decay at any moment (it's a memoryless process). Anything else would require some sort of internal state (an "age"), which would make them distinguishable by its effect on their decay probability, which is inconsistent with quantum statistics. -- BenRG (talk) 20:48, 4 June 2015 (UTC)[reply]
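For an exponential lifetime the memorylessness is a one-line check (in LaTeX notation):

    P(T > s+t \mid T > s) = \frac{e^{-(s+t)/\tau}}{e^{-s/\tau}} = e^{-t/\tau} = P(T > t)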
If that figure is correct, it would make the minimum expected lifespan of an electron something like the cube of the current estimated age of the universe. That may not be literally immortal, but for practical purposes it would do. ←Baseball Bugs What's up, Doc? carrots 07:26, 4 June 2015 (UTC)[reply]
I get what you're saying, but that statement doesn't technically make sense. The cube of a time period has no physical meaning, and could be expressed in s^3 or year^3 etc. Just like it makes no sense to say that 100 m > 13 m^2 or 3 feet < 5 gallons, you can't compare years with 'cubic years'. - Lindert (talk) 12:39, 4 June 2015 (UTC)[reply]
X to the third power is the cube of X. I don't mean that time is cube-shaped. ←Baseball Bugs What's up, Doc? carrots 18:36, 4 June 2015 (UTC)[reply]
(ec) The problem with that is that it's entirely dependent on the unit of time that you use (and X to the third power does not have to be greater than X), which makes it a completely arbitrary comparison. If you take for the age of the universe X ≈ 10^9 (years), and for this half-life λ ~ 10^27 (years), you might say that λ ≈ X^3 or X^3/λ ≈ 1. However, if you express both periods in seconds, you get X ≈ 4 * 10^17 (seconds) -> X^3 ≈ 8 * 10^52, while λ ≈ 3 * 10^34 (seconds), so now λ << X^3. If you use the Planck time as unit of time, the difference will be so great that λ is infinitely negligible compared to X^3. By choosing which unit to use you can literally get any value you like for 'X^3/λ', which is what I mean when I say that the comparison has no physical meaning. - Lindert (talk) 19:17, 4 June 2015 (UTC)[reply]
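A sketch of that unit-dependence in code (using round figures from the thread; the ratio X^3/λ swings by many orders of magnitude with the choice of time unit):

    SEC_PER_YEAR = 3.15e7
    X_years, lam_years = 1.4e10, 4.6e26   # age of universe, lifetime bound (years)
    for unit, scale in [("years", 1.0), ("seconds", SEC_PER_YEAR)]:
        X, lam = X_years * scale, lam_years * scale
        print(unit, X**3 / lam)   # ~6e3 in years, ~6e18 in seconds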
The cubing was just a semantic issue and I don't believe BB actually misunderstood that. But I think there is a more fundamental conceptual error being made. Some are interpreting mean life-time as: this is the amount of time each electron lives and since even the lower bound for that is much longer than the lifetime of the observable universe, it implies that no electron has decayed in the universe's lifetime. That is wrong!
A particle having mean life-time τ means that its probability of spontaneously decaying in the time-period T is T/τ (for T << τ). And with N particles, since each particle acts independently, the number of particles that decay in time-period T is NT/τ. And there are a lot of electrons around, so N is humongous! For instance, Earth has roughly N = 10^50 electrons, so if the true mean lifetime of the electron were τ = 10^27 years, about 10^23 electrons in Earth would decay every year, which is more than a quadrillion electrons decaying every second. Of course, currently accepted theory says that that number is exactly zero and not a single electron decay has ever been observed (as PrimeHunter and Nimur already mentioned), but don't be misled by comparing the mean lifetime to the lifetime of the universe. Abecedare (talk) 18:51, 4 June 2015 (UTC)[reply]
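And the same sort of arithmetic for the paragraph above, as a sketch (with the round numbers assumed there):

    N = 1e50       # rough count of electrons in the Earth
    tau = 1e27     # hypothetical mean lifetime, in years
    per_year = N / tau
    print(per_year, per_year / 3.15e7)   # ~1e23 per year, ~3e15 (quadrillions) per second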
So not every electron is necessarily immortal, but the average electron effectively is? ←Baseball Bugs What's up, Doc? carrots 19:36, 5 June 2015 (UTC)[reply]
@Trovatore: I am inclined (but not particularly qualified) to disagree about the temperature of an isolated atom or molecule. The Boltzmann constant appears to relate the speed and temperature of even a single atom. It is true, of course, that one atom without a specified frame of reference can't be given a speed; for a molecule that is not as true since the rotational degrees of freedom are expected to carry much of the temperature, though it would still involve a luck-of-the-draw factor. Nonetheless, there might be reasons why a frame can be assumed without an average population to average up and infer it from, in which case you could define a temperature. Wnt (talk) 23:14, 6 June 2015 (UTC)[reply]
Well, so first of all, you betray here a specifically "kinetic" notion of the concept of temperature. That conception is problematic in a number of ways, and is certainly not the one I learned in undergrad physics — I learned the one based on the reciprocal of the derivative of entropy with respect to internal energy, where entropy is defined as the logarithm of the possible number of quantum states. See Kittel & Kroemer, Thermal Physics, 1980.
But to avoid going too far afield, let's stick with the kinetic notion for the moment. To make that work, you have to do some bookkeeping magic with regard to how you count rotational and vibrational modes, but we can let that slide. The directly relevant point is that it's only the random component of the kinetic energy that counts. Kinetic energy associated with predictable motion is not part of heat; it's just part of the motion of the object.
So if you freeze a baseball to 5 K and then throw it in a perfect vacuum at a speed of 0.999c, what's its temperature? Still 5 K (except to the extent that your throwing motion induced vibrations which then got dissipated as heat).
If you want the same point in an inertial reference frame, take a rigid ball, freeze it to 5 K, and then set a spin on it to the point that the rotational speed of the atoms on the outside of the ball dominates the thermal speed of those atoms. (That would probably have to be pretty fast and the ball might disintegrate, but that's not a problem in principle; maybe make it a very cold neutron star so the gravity holds it together.) What's its temperature? Still 5 K. The component of the kinetic energy of those atoms that comes from the ball's rotation is not random, so it doesn't contribute to the temperature. --Trovatore (talk) 23:37, 6 June 2015 (UTC)[reply]
@Trovatore: The entropy of an individual particle seems difficult to define, but I'm not sure it's impossible. Consider that when you look at a particle such that in your frame it is moving at speed v, the momentum is mv but the kinetic energy is (1/2)mv^2. But the difference between distinguishable momentum states is limited by the Heisenberg principle. So the faster you consider it to be going, the greater the range of kinetic energy it might have, given a certain observable momentum, or considering a minimal measurable increase in momentum.
Whatever the temperature of a baseball is when considered as a baseball, it should be clear that if it's raining that kind of fastballs, and you're not looking at the moment-to-moment changes in velocity something experiences when hit by them, you know that thing is going to be very hot. So I'm thinking that the difference between 'temperature' and 'velocity' should be one of perspective - whether you think of something as a random collision hazard or an independent object. I should add that I would think, but don't actually know, that Brownian motion should be a component of temperature - that the speed of the motion should follow the Boltzmann relation and that the impacts of such semi-macroscopic objects should be needed for a solution to produce as much energy at its edges as would be expected. I could be going far astray here, so let me know. :) Wnt (talk) 12:07, 7 June 2015 (UTC)[reply]
OK, look, I'm not a real expert in any of this. I challenged Jayron because I didn't see (and still don't) what thermo is supposed to have to do with the question about electron decay. But I'm going off old memories of a class aimed at Caltech sophomores, so take it with a grain of salt.
That said, I'll take a shot at your raining-frozen-fastballs scenario. My understanding is that if the balls are bouncing around randomly, then yes, you can consider that energy to be in some sense heat, even though it's not at equilibrium.
Sometimes it is possible for the same physical object to have different systems that are in some sort of rough equilibrium within that system, but not in equilibrium with each other. For example, you can consider a system of atoms that have a magnetic dipole moment in a magnetic field, so that their energy is different depending on their orientation. That system may even have a negative temperature, although the overall temperature of the object cannot be negative.
However, if the balls are all coming straight at you in the same direction, then no, that is absolutely positively 100% not heat. It can be expected to turn into heat soon, but it isn't now. Heat requires randomness.
Some of these answers may seem a little unsatisfactory, to me as well as to you. Do some of the answers depend on semi-arbitrary bookkeeping choices? It sort of seems that way. Do you have to have a position on the philosophical foundations of probability before you can understand randomness, and therefore before you can understand thermo? Maybe. Maybe it depends on what you mean by "understand". Not sure. What if you have motion that isn't really random, but just pseudo-random? Don't know.
No doubt there are people here who are more informed on these things than I, or have thought about them more, or both. I would be interested to hear their thoughts. --Trovatore (talk) 19:33, 7 June 2015 (UTC)[reply]