User talk:RonCram/AGWControversySandbox

From Wikipedia, the free encyclopedia

Developing a new outline

In my view, the article should be organized into an outline that discusses these five issues. Regarding the satellite data, it is possible that it fits better under point #2 than under #1. If I have left out any other main points, let's discuss them. RonCram 18:54, 19 February 2007 (UTC)[reply]

Importance of archiving data, methods and code

The policy of Science magazine is here. [1] All of the science journals have a similar policy AFAIK. The code is important because it is very difficult to reproduce someone's result without it. How is one to know what kind of "fix" is in the code? How can scientists evaluate the "flux adjustments" or "parameterizations" or other factors designed to make the simulation look like it is related to the physical world? What if scientists want to apply the same code to a different data set? The US government has many agencies that fund research, with a total of some $2 billion going into climate research each year. You should read the data archiving policy of NSF. [2] [3] In it you will find that the NSF expects researchers to make their code available to other scientists. So you see, my comment is not from the world of Richard Stallman, but of Karl Popper. As I said before, any researcher who does not archive his data, methods and even his code is not doing science but pseudo-science. The climate journals and other accomplices in this crime against science should be ashamed of themselves. RonCram 17:50, 3 March 2007 (UTC)[reply]

A question

Nice, can we edit the page? Point two interests me.. kind regards, sbandrews 17:37, 2 March 2007 (UTC)[reply]

sbandrews, yes you can edit the page. I appreciate the input. RonCram 17:48, 2 March 2007 (UTC)[reply]

cool, I'm gonna start with some fact tags - not that I doubt, just 'cos I don't know the sources... sbandrews 18:01, 2 March 2007 (UTC)[reply]

It seems slightly strange to edit someone else's thought, but here goes... sbandrews 20:49, 2 March 2007 (UTC)[reply]

I noticed your edit; however, it is incorrect. The data and methods have not been made available. Warwick Hughes requested the data once and was told no, as Steve McIntyre testified before Congress in the link I provided. Later Willis Eschenbach filed a Freedom of Information Act request to get the data and methods and was denied. I believe they are appealing the decision. After that a lawsuit will be filed. If you feel the data is available, please provide the link. RonCram 20:56, 2 March 2007 (UTC)[reply]

ok, may take a while - feel free to put a fact tag anywhere you disagree.. sbandrews 21:25, 2 March 2007 (UTC)[reply]

WMC comments

Some comments. First of all, is this to be about the *scientific* controversy or the public one? It is bad to mix them indiscriminately. If it is scientific, then the bulk of the references should be to scientific literature. If the public, then to newspapers and speeches. Blog postings and private web pages shouldn't be in a majority.

Second, is it about the current (ar4) state of debate, or as it was a few years ago?

You have a section entitled Is the IPCC fairly reflecting the science? but have managed to find nothing positive to say. This isn't good.

You have a large collection of UHI links. But we have a perfectly good UHI page (if you don't think it's good, improve it. Making this page into a POV fork of it won't work). If those links are valuable, add them there. If they aren't... don't use them here.

Inevitably, somewhat negative. I doubt this will work. But you've only just started, so I should give you more time to work on it. William M. Connolley 17:39, 2 March 2007 (UTC)[reply]

William, I do not intend to establish a POV fork of the UHI page. I would like to improve the UHI page but have not had time. I think the UHI question should be discussed (or introduced) here, but there should also be a link to the UHI page. I think you will find that the majority of the links will be discussing the comments of scientists, some from the literature and some from newspapers. Blog postings will mainly be from scientists involved in the controversy - RC and CA. BTW William, I do want the article to fairly reflect the controversy. I'm not trying to censor the majority at all. If you can find rebuttals to the links I am providing, please link them in - or any other info you think is appropriate. RonCram 17:48, 2 March 2007 (UTC)[reply]

My view

In my opinion, the first discussion should be about the issues regarding the reliability of the models and the limits of predictability. Then the predictions themselves could be discussed. There is plenty of material (Orrin Pilkey (a must-read according to Oreskes), Hendrik Tennekes, David Orrell, to name a few) that is hardly addressed so far. --Childhood's End 19:00, 2 March 2007 (UTC)[reply]

View of energy being stored in the oceans

Comments on Hansen's "robot" study by Fred Singer (UofVa, etc.):

"The idea that energy can be stored and linger in the oceans and can later raise temperatures makes no physical sense," Mr. Singer said. "It violates the law of thermal dynamics and is not tenable. I think it is erroneous and should be corrected." (http://www.washingtontimes.com/national/20050428-115517-5464r.htm)

How is "consensus science" done?

The following is a post by Mike Carney on ClimateAudit.org. [4] His tongue is firmly in cheek. If you don't see the humor, you cannot pretend you are a scientist.

I can’t resist adding my favorite “consensus science”:
Correct answers can be determined by whether other studies got the same answer. That’s consensus. Examining the methods and underlying data can be destructive of consensus. Remember, replication of methods is not necessary. Only replication of results is necessary.
Corollaries:
Any errors found in methodology or data sets are not important unless the finder can produce their own thesis to solve the problem undertaken by the original author. There are, unfortunately, fields like accounting that labor under the impression that “auditing” is a valid process. The only valid process is producing answers.
Actually having available the data sets or the methodology used to produce a paper is unimportant. Anyone who asks for such information is clearly not a scientist in any meaningful sense.
Since the consensus is well known, data sets and methods can be properly cleaned of outliers. This is sometimes referred to as cherry picking. Also see How its done.[5]
Answers that result in quarter million dollar gifts or just a multi-million dollar grant are better than answers that don’t. Like politicians, self interest has no effect on the answers produced by scientists. - Mike Carney

I love this. Thank you, Mike! RonCram 15:43, 14 April 2007 (UTC)[reply]

"We should not have taken Michael Mann's word for it"

Interesting post on ClimateAudit.org from Michael Mayson:

Top fifteen climate science reasons for not disclosing data or code - or preventing others from doing so!!
I have just read this :
http://www.techcentralstation.com/062404I.html
and Willie Soon has this comment about publishing his paper “Estimation and representation of long-term (>40 year) trends of Northern-Hemisphere-gridded surface temperature: A note of caution”.
He says “Actually the paper was not off to a good start, it was first scheduled to appear on January 27 after being approved by two reviewers and the editor in charge. But upon not seeing our paper appearing as planned and promised and after several rounds of my persistent inquiries, I found out that the publication of our paper was being delayed and the paper may not be printed because of a presumed copyright violation issue. The misunderstanding about and the attempt (indicated below) to prevent the publication of our paper is both alarming and sad but the raw fact is that I had obtained all the necessary copy-right permissions for the purpose of this academic research work prior to the submission of our scientific manuscript to the Geophysical Research Letters (GRL) around November 2003. The matter is now resolved and our paper was published two-and-a half weeks later. Upon my satisfactory resolution with the GRL production office by presenting all the proofs, the GRL production person I spoke to said: “We should not have taken Michael Mann’s word for it.” [6]

Thank you, Michael Mayson! RonCram 00:38, 15 April 2007 (UTC)[reply]

"Consensus" is not an ideal in the sociology of science

Another very nice post on ClimateAudit.org. This comment, directed to Steve McIntyre, is by Francois Ouellette. I've edited it just a bit:

Steve,
In the 1940’s, the sociologist Robert K. Merton proposed that the scientists were following a certain set of norms, that these norms constituted what can be called the ethos of science, or at the very least “academic science”. Those norms are summarized under the acronym “CUDOS”. They are “Communalism”, “Universalism”, “Disinterestedness”, and “Organized skepticism” (later, others added “originality” for the “O”).
This so-called “functionalist” view of the sociology of science has been much criticized over the past 20-30 years, but in the end, you realize that much of the criticism amounts to saying that the scientists do not really follow these norms in practice, which is really but another way of saying that they should!
Now, you will notice that “consensus” is NOT one of the norms. To stay within the realm of sociology, one might say that the much talked about “consensus” is a “boundary object” between the scientists and the policy makers. It is useful to both, but can have different meanings on both sides of the barrier. For the policy makers, the consensus is one way to avoid having to look at all the facts themselves, and to take for “true” what the scientists are telling them. For scientists, a consensus actually doesn’t mean much. It may be a state of affairs, just the fact that most scientists in a field mostly agree on something, but it never means that that something is “true”, or at least it shouldn’t. The enterprise of science is not about establishing a consensus, but about always being allowed to question such consensus, that is “organized skepticism”. THAT is the real norm of science. The consensus is something a scientist uses to take one of two paths: either go along with it and see what comes next (some would say that is the “normal science” of Thomas Kuhn), or, find out if there is anything wrong with it (which, in turn, could lead to a “revolution”, big or small). But both avenues are ALWAYS open to scientists. That is why the consensus in itself is of little value.
Now let’s come back to the norms. Communalism means sharing the information, e.g. in scientific publications. Clearly, we have a problem here, since, as you have shown, all the data are not properly disclosed. Universalism means that anyone’s contribution is welcome, independent of race, sex, and, of course, education, as long as the contribution makes a valid point. So it is unfair to say that your contribution is not valid because you are not an expert in the field. Are you making a valid point or not, should be the question to ask. “Disinterestedness” is another contentious point. Is Mann really disinterested when his own findings are heralded as the “icon” of global warming? Much is at stake here for him, if he is proven wrong. “Interest” also means, not only money or prestige, but also political interests. I think everyone would agree that the global warming issue is very much politicized. As such, it is highly vulnerable to a breaching of the norm of disinterestedness. As for skepticism, that is really what it’s all about, I mean, auditing the results.
In short, what your blog is doing is trying to enforce the real scientific norms. In that sense, it is a much more valid endeavour than following or trying to establish an elusive “consensus”. The “real” scientist, here, is you, Steve. [7]

Thank you, Francois! RonCram 01:00, 15 April 2007 (UTC)[reply]

Interesting post by Willis Eschenbach

KevinUK, you ask a good question:

OK I’ve continued to read the rest of this thread and the others related to it. Thus far it’s clear that there are lots of problems with this dataset that is used as the primary proof of negligible UHI effects in the currently claimed warming trend in the mean global surface temperature. What other alternative studies exist which have looked into the UHI effect where there are large differences between rural and urban populations (Europe, US/Canada, USSR, India)?

The abstract of a recent study in Climate Research [CR 33:159-169(2007)] says:

Recent California climate variability: spatial and temporal patterns in temperature trends

Steve LaDochy (1,*), Richard Medina (1,3), William Patzert (2)
(1) Department of Geography & Urban Analysis, California State University, 5151 State University Drive, Los Angeles, California 90032, USA
(2) Jet Propulsion Laboratories, NASA, 4800 Oak Grove Drive, Pasadena, California 91109, USA
(3) Present address: Department of Geography, University of Utah, 260 South Central Campus Drive, Salt Lake City, Utah 84112-9155, USA

  • Email: sladoch@calstatela.edu

ABSTRACT: With mounting evidence that global warming is taking place, the cause of this warming has come under vigorous scrutiny. Recent studies have led to a debate over what contributes the most to regional temperature changes. We investigated air temperature patterns in California from 1950 to 2000. Statistical analyses were used to test the significance of temperature trends in California subregions in an attempt to clarify the spatial and temporal patterns of the occurrence and intensities of warming.

Most regions showed a stronger increase in minimum temperatures than with mean and maximum temperatures. Areas of intensive urbanization showed the largest positive trends, while rural, non-agricultural regions showed the least warming. Strong correlations between temperatures and Pacific sea surface temperatures (SSTs) particularly Pacific Decadal Oscillation (PDO) values, also account for temperature variability throughout the state. The analysis of 331 state weather stations associated a number of factors with temperature trends, including urbanization, population, Pacific oceanic conditions and elevation. Using climatic division mean temperature trends, the state had an average warming of 0.99°C (1.79°F) over the 1950–2000 period, or 0.20°C (0.36°F) decade–1. Southern California had the highest rates of warming, while the NE Interior Basins division experienced cooling.

Large urban sites showed rates over twice those for the state, for the mean maximum temperatures, and over 5 times the state’s mean rate for the minimum temperatures. In comparison, irrigated cropland sites warmed about 0.13°C decade–1 annually, but near 0.40°C for summer and fall minima. Offshore Pacific SSTs warmed 0.09°C decade–1 for the study period.

Note that over the last 50 years, for the large urban sites the increase in maximum temperatures was twice the statewide mean rate, and the increase in minimum temperatures was five times as large. This means that the mean temperature increase in urban areas was 3.5 times as large as the mean rate statewide. The mean rate statewide was 0.2°C/decade, so that means the urban sites averaged 0.7°C/decade … I’d call that significant UHI warming myself.

Unfortunately, the full report is extremely expensive (€80), so I think I’ll have to pass on it … there is also an interesting analysis of trends in California temperatures by county at Warwick Hughes’s site.

As a rough rule of thumb, studies by Torok et al. (2001) and Oke (1973) indicate that the UHI effect is about 1.5°C * log(population/population0). The data on Warwick Hughes's page, however, gives a slightly higher value of 1.8°C.

w.[8]

Thank you, Willis! RonCram 00:56, 16 April 2007 (UTC)[reply]
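
As a rough illustration (my own sketch, not anything from Willis's post), the rule of thumb and the arithmetic above can be checked as follows; the reference population of 1,000 and the example city size are hypothetical values chosen only to make the numbers concrete:

 import math

 def uhi_estimate(population, coeff_c=1.5, population_ref=1000):
     # Rule of thumb quoted above: UHI ~ coeff_c * log10(population / population_ref).
     # coeff_c = 1.5 (Torok et al./Oke) or ~1.8 (Warwick Hughes data);
     # population_ref = 1000 is a hypothetical reference value, not from the post.
     return coeff_c * math.log10(population / population_ref)

 # Willis's arithmetic on the California abstract: urban max ~2x and urban min ~5x
 # the statewide mean rate of 0.2 C/decade, so the urban mean rate is ~3.5x.
 statewide_rate = 0.20                      # deg C per decade
 urban_mean_factor = (2 + 5) / 2            # = 3.5
 print(urban_mean_factor * statewide_rate)  # -> 0.7 deg C/decade
 print(uhi_estimate(1_000_000))             # ~4.5 deg C for a city of one million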

Interesting edit by Childhoodsend

In February 2007, environmental consultant Madhav L. Khandekar, in reaction to the Oreskes survey, issued an annotated bibliography of 68 recent peer-reviewed papers which question aspects of the current state of global warming science. It is titled "Questioning the Global Warming Science: An annotated bibliography of recent peer-reviewed papers". [9] Thank you, Childhoodsend, for bringing this paper to the Global Warming article. I look forward to reading it. RonCram 16:00, 27 April 2007 (UTC)[reply]

Von Storch and Zorita comment on data withholding

"Another important aspect was his insistence on free availability of data, for independent tests of (not only) important findings published in the literature. It is indeed a scandal that such important data sets, and their processing prior to analysis, is not open to independent scrutiny. The reluctance of institutions and journals to support such requests is disappointing." [10] See also "The Decay of the Hockey Stick" by Von Storch. [11]

Nature blog limits discussion

While the blog posting by von Storch and Zorita credited McIntyre for insisting on free availability of data, it tried to take some credit that rightfully belonged to McIntyre and McKitrick. Nature invited Steve McIntyre to respond to the blog posting by von Storch and Zorita and then withdrew the invitation. [12] I also posted a response to von Storch and Zorita (before I knew they had invited McIntyre to respond), but Nature decided not to publish my post either. Here is what I wrote:

Dear Drs. von Storch and Zorita,
I was amazed to see your posting on “The Decay of the Hockey Stick” and your follow up posting on the contributions of McIntyre and McKitrick. In the first you claim your 2004 journal publication a “remarkable event” because the Hockey Stick was such an unassailable icon in the science community. However, the shine on the Hockey Stick had already been removed by the 2003 paper published by McI and McK in E&E.
You also claim that McI and McK were correct when they showed that Mann’s statistical method produces hockey stick shapes from trendless red noise but then you write “they would have a valid point in principle, but the critique would not matter in the case of the hockey-stick.” This is an extraordinary claim. Are you unaware that McI has published a response showing why it does matter in the case of the hockey stick? Since you have not publicly replied, I was forced to presume you had conceded the point. If you have not conceded the point, please do publish a response.
You also claim that McI and McK only looked at statistical problems with the Hockey Stick paper and that you looked at methodological problems. Speaking only as an interested layman, I do not think your claim obtains. McI and McK criticized Mann’s selection of the bristlecone pine series, an issue you agree may be valid but have not researched yourselves. But you then go on to make the strange claim that no one else has researched the issue either. This is hardly accurate since the U.S. NRC published a report (at the request of Congress) agreeing that the bristlecone pine series is not a temperature proxy and should not be used.
You also proclaim yourselves satisfied with what has been achieved – an open debate. I find this most surprising of all, because I am completely unsatisfied. The temperature reconstructions in 4AR all continue to use the bristlecone pine series.
Perhaps most unsatisfying to me is the lack of accountability in the science community. As you know, Dr. Mann withheld results of validation tests in a subdirectory called “BACKTO_1400-CENSORED” that were contrary to his conclusions. According to published reports, the folder showed Mann knew his method did not get a hockey stick shape without the bristlecone pine series. Despite this knowledge, Mann made claims that his method was robust and not dependent on any particular proxy. If the science community does not have any way to ensure honesty in science, what credibility does anyone’s work really have? RonCram 06:18, 16 May 2007 (UTC)[reply]

Origin of the term "Hockey Team"

"Hockey Team" is the name a group of climatologists called themselves who are devoted to preserving the Hockey Stick. Here is information on how the name came into usage. [13] RonCram 06:56, 16 May 2007 (UTC)[reply]

Glacial retreat prior to 1850

ClimateAudit has this post from Michael Jankowski on comment #25: [14]

As the 2001 IPCC TAR pointed out explicitly (not sure if the AR4 did so, haven’t looked hard enough), these reconstructions should show significant warming prior to 1850 in order to correspond with widespread glacial retreat. However, most reconstructions, including the surface record, are several decades to a century (or more) behind. The TAR refers to this issue as one that “remains unresolved.” Gavin responded to me once with something along the lines of, “I don’t know why they mentioned it, it’s overblown.” Yet it seems important that there’s a physical record (glaciers) in direct conflict.
Of the above, Moberg’s seems to be the one which may show warming in a time appropriate with the onset of glacial retreat…interestingly enough, his does show some extreme warmth in the area of the MWP, the coolest valleys of the LIA, and the lowest 20th century temp at the time of the end of the reconstruction.

I would like to know more about this. RonCram 13:51, 19 May 2007 (UTC)[reply]

Law of Diminishing Returns applied to Atmospheric CO2

Interesting post by Don Keiller on two points. The second one is very important. I have been looking for a citation on the second point and here it is:

I wet myself laughing reading the NS article- particularly the simplistic links made between CO2 in the atmosphere and warming;

1) “These lags show that rising CO2 did not trigger the initial warming at the end of these ice ages - but then, no one claims it did. They do not undermine the idea that more CO2 in the atmosphere warms the planet.” No mention, of course, of the work by Rothman, D.H., Atmospheric carbon dioxide levels for the last 500 million years Proceedings of the National Academy of Sciences 99 (7): 4167-4171, (2002). In which he states “The resulting CO2 signal exhibits no systematic correspondence with the geologic record of climatic variations at tectonic time scales”

2) “We know CO2 is a greenhouse gas because it absorbs and emits infrared. Fairly basic physics proves that such gases will trap heat radiating from Earth, and that the planet would be a lot colder if this did not happen”.

Again no mention of the Beer-Lambert law - more basic physics - which states that the intensity of transmitted radiation decreases exponentially with increasing depth of absorbing material. Or that the principal IR absorbance bands of atmospheric CO2 are already at, or near to, saturation (Thomas, G.E. and Stamnes, K. Radiative Transfer in the Atmosphere and Ocean. Cambridge University Press, 1999.) Comment #46 [15]
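
To make the Beer-Lambert point above concrete, here is a minimal sketch (mine; the absorption coefficient, concentration and path length are made-up illustrative numbers, not values from Keiller's post or from Thomas & Stamnes):

 import math

 def transmitted_fraction(absorption_coeff, concentration, path_length):
     # Beer-Lambert law: I/I0 = exp(-k * c * L).  Transmitted intensity falls off
     # exponentially with the amount of absorber in the path.
     return math.exp(-absorption_coeff * concentration * path_length)

 # Illustrative numbers only: once the band is nearly saturated, doubling the
 # absorber barely changes how much radiation still gets through.
 k, L = 0.5, 10.0
 for c in (1.0, 2.0, 4.0):
     print(c, transmitted_fraction(k, c, L))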

Problems in Defining the "Global Mean Temperature"

Willis Eschenbach posted this on ClimateAudit:

bender, I second Jaye’s request for your definition of the global mean temperature.
For example, the most commonly used “global mean temperature” is not an average of all of the gridcells. It is the average of the two hemispheres averaged separately. As far as I know, there is no theoretical reason that this method (averaging of the hemispheres) is either better or worse than averaging all of the gridcells at once. But they give different answers.
So … which one of these two is the real, authentic global mean temperature, and why? It can’t be both.
Here’s another question. Many of the gridcells have no temperature records, and some of them have never had any temperature records. How can a temperature be “global” when we are not even measuring the temperatures of large portions of the globe?
What was stated, however, and what you challenged, is that “average temperature has no real physical meaning”. In fact, it doesn’t, and here’s why.
Suppose we look at the State of California, which has 58 counties. Let’s suppose we have one temperature measuring station in each county, which we use as the temperature for the county. In addition, each county has an area.
We can ask, “what is the average area of a California county?”. To find out the answer, we add up all of the areas of the individual counties. This total has a real physical meaning — it is the total area of the State of California. We divide this number, which has a real physical meaning, by 58, and we have an average with a real physical meaning.
But look what happens if we ask “what is the average temperature of a California county?” To try to answer this, we proceed as with the area, adding up all of the temperatures of the individual counties. But at that point we notice a strange thing — the sum of the temperatures gives us a “total temperature” of California which is on the order of 750° or so … what does that number represent?
Clearly, this number, the “total temperature of California”, has no real physical meaning. And when we divide it by 58, we get an “average county temperature” which, since it is merely 1/58th of a number with no real physical meaning, clearly cannot have a real physical meaning.
The difference is that temperature is an intensive variable, while area is an extensive variable. Extensive variables (like length, area, or weight), can be averaged. Intensive variables (like temperature, density, or ppmv) cannot be averaged without reference to one or more related extensive variables. (An extensive variable depends on the amount of whatever is being measured, while an intensive variable does not depend on the amount).
With gases, the problem is even more complex. If you take a cubic meter of air at 30°C from Denver (the “Mile-High City”), transport it to the seaside and mix it with a cubic meter of air at 10°C, what is the temperature of the resulting mixture? Since we have included an extensive variable (volume), you’d think it would be 20°C … but it’s not. The air in Denver is less dense, both because of elevation and temperature, so the average is only about 18°C.
Finally, the underlying assumption is that atmospheric temperature is a proxy for the heat content of the air. What we want to know is whether the heat content of the earth is increasing. But we cannot do that without including the humidity of the air. Moist air contains more energy than dry air. The energy content of the air can change significantly without the temperature changing at all. This is actually a very common occurrence when the moisture in the air is either condensing or freezing, or when clouds are evaporating.
For all of these reasons, in fact the “global mean temperature” has no physical meaning. It can be useful in a general sense (summer is warmer than winter), but it is not an accurate physical measure in the same sense as average area or average length.
w. [16] See comment #110.

Thanks for the thoughts, Willis. RonCram 02:09, 21 May 2007 (UTC)[reply]
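
A small sketch of the intensive-vs-extensive point and the Denver air-mixing example above; the county areas, temperatures and air densities are invented for illustration, and the mix is the ordinary mass-weighted average rather than anything taken from Willis's post:

 areas_km2 = [10000, 5000, 20000]        # extensive: the sum (total area) is meaningful
 temps_c   = [15.0, 22.0, 30.0]          # intensive: the sum ("total temperature") is not

 total_area = sum(areas_km2)             # physically meaningful total
 avg_area = total_area / len(areas_km2)  # meaningful average

 sum_temps = sum(temps_c)                # "total temperature" of 67.0 - physically meaningless
 naive_avg_temp = sum_temps / len(temps_c)

 # Mixing equal volumes of air at different densities (Denver vs. sea level):
 # the result is mass-weighted, so it lands below the naive 20 C midpoint.
 rho_denver, t_denver = 0.95, 30.0       # kg/m3 (illustrative), deg C
 rho_sea,    t_sea    = 1.25, 10.0
 mixed_t = (rho_denver * t_denver + rho_sea * t_sea) / (rho_denver + rho_sea)
 print(round(mixed_t, 1))                # ~18.6 C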

Willis added some additional thoughts on this topic in comment #156:

While I am happy to discuss this elsewhere, the reason that it is important to the current topic is that we have several “global mean temperature” datasets, which show both different trends and different anomalies. Because “global mean temperature” has no agreed upon meaning, none of these datasets is theoretically superior to any other. This has a couple of effects.
1) People are free to choose which “global mean temperature” dataset they wish to use to compare and fit their proxy data … which in turn changes the result of the proxy exercise in whatever direction they may prefer.
2) It increases the uncertainty of both the data and the proxy reconstruction. For example, even using a single dataset, an average of all of the stations in the world shows a different trend than averaging the hemispheres individually and then averaging the two hemispheres. Which one is correct? We can’t say, there is no theoretical reason to prefer one over the other, but it certainly must increase the uncertainty of whichever one we may choose.
For example, were all of the various proxies in the graphic above done using the same “global mean temperature” dataset? I would doubt it, although I don’t know … but if they are not, it must perforce increase the uncertainty.

Thank you again, Willis. RonCram 02:53, 21 May 2007 (UTC)[reply]
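
A toy illustration of Willis's point about the order of averaging; the gridcell values and the uneven hemispheric coverage are invented, the only claim being that the two procedures give different numbers whenever coverage is uneven:

 # Invented example: the NH has more reporting gridcells than the SH.
 nh = [15.2, 14.8, 15.5, 16.0, 15.1]   # five NH gridcell means (deg C)
 sh = [13.0, 13.4]                     # only two SH gridcell means

 all_cells = nh + sh
 global_all_at_once = sum(all_cells) / len(all_cells)       # weighted toward the NH

 nh_mean = sum(nh) / len(nh)
 sh_mean = sum(sh) / len(sh)
 global_hemispheres_first = (nh_mean + sh_mean) / 2         # equal weight per hemisphere

 print(round(global_all_at_once, 3), round(global_hemispheres_first, 3))  # 14.714 vs 14.26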

Divergence and Alternative Hypotheses

Climate reconstructions based on tree ring growth, such as the one Mann conducted in MBH98 (the Hockey Stick), have come under fire because of "divergence" from the temperature record over the last 20 years or so. In other words, tree ring growth does not correlate with the temperature record as well as scientists thought it did. Determining why this is so (Are certain trees bad proxies? Or are there other problems? Are there any good tree ring proxies?) is an important endeavor. The fact that divergence exists is one reason people are skeptical of temperature reconstructions. A regular poster on ClimateAudit going by "bender" (I think this is Michael Bender, the climatologist from Princeton, but I'm not certain) posted this:

We’re back on track. I agree with all these points in #132 and #133. Independence (i) among observations within a series and (ii) among recon series are critical issues that I think have not been dealt with adequately (in the primary literature or in AR4). THIS is precisely why I am skeptical about the recons - NOT because I am skeptical about the concepts of growing season length or mean fields.
Divergence is consistent with a number of alternative hypotheses:
(i) a new forcing agent (Briffa, AR4) such as “global dimming”
(ii) nonlinear response to multiple climate inputs (D’Arrigo & Wilson)
(iii) proxies not responding to temperature at all (John A et al.)
As Jaye says, controlled experiments are the most logical means to figuring this out. The problem is cost. Even greenhouse space costs. Yet you have some other commenters suggesting that any experimentation at all is a waste of taxpayer money.
Well, we can’t do NOTHING. The AGW hypothesis is too critical to not do ANYTHING about it. The least we can do is study it.[17] See comment #134. RonCram 02:31, 21 May 2007 (UTC)[reply]

"Twilight Zone" continuum between clouds and dry aerosols

A post by "Rererence" [18] (comment #9) says:

A continuum exists between clouds and dry aerosols, as reported recently in Geophysical Research Letters by scientists from this Branch and colleagues at the Weizmann Institute in Israel. This zone of gradual decline — a twilight zone– consists of dissipating cloud droplets and hydrated aerosols. These “in between particles” act to enhance radiances and aerosol optical depth around clouds and throughout a zone extending 10-20 km from the nearest cloud. Because of the widespread distribution of clouds, we expect 30% - 60% of the cloud free atmosphere to be affected. If climate models do not properly represent the physical processes leading to this continuum in the twilight zone, the models will be incorrectly estimating aerosol direct effects and forcing.

This looks like an interesting problem. You can find the full story here. [19] RonCram 13:36, 21 May 2007 (UTC)[reply]

Confusion in evaluating the "divergence" problem

As readers here know, dendroclimatologists have a real problem with divergence. Within the last 20-30 years, when we have a good temperature record, tree ring growth does not correlate well with temperature change. The scientists are trying to figure out what that means and whether they have to choose between saving their discipline and saving global warming. In the post below, ClimateAudit poster "mikep" is confused by a statement made by D'Arrigo. It appears to me that D'Arrigo is guilty of circular reasoning, assuming the very facts she is trying to establish.

I have just been glancing at the D’Arrigo divergence paper (pre-publication version at http://www.ldeo.columbia.edu/~liepert/pdf/DArrigo_GPC2007.pdf) and came across the following statement:
"The principal difficulty is that the divergence disallows the direct calibration of tree growth indices with instrumental temperature data over recent decades (the period of greatest warmth over the last 150 years), impeding the use of such data in climatic reconstructions. Consequently, when such data are included, a bias is imparted during the calibration period in the generation of the regression coefficients. Residuals from such regression analyses should thus be assessed for biases related to divergence, as this bias can result in an overestimation of past temperatures and an underestimation of the relative magnitude of recent warming (Briffa et al. 1998a and b).”
I found this puzzling. Surely the biggest difficulty is that we don’t know whether there was a divergence problem before we have any direct measurements of temperature to calibrate against. Given the many factors that appear to affect the width and density of tree rings the balance between these factors may well have been different in the past and the existence of a divergence problem now is (on some sort of “uniformitarian” argument) evidence that it might well have happened in the pre-instrumental past. Indeed some of the trees which have not responded in the instrumental past may have responded in the pre-instrumental past (so we may already have had a divergence problem without knowing it!).

Thank you, mikep! I look forward to reading this paper by D'Arrigo. RonCram 13:47, 21 May 2007 (UTC)[reply]
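
For what it's worth, here is a toy simulation (entirely my own construction; the series length, noise levels and the form of the divergence are invented) of the calibration bias mikep and D'Arrigo are describing: if the proxy stops tracking temperature in recent decades, a regression calibrated over a window that includes those decades produces distorted coefficients and hence a distorted reconstruction of the past:

 import numpy as np

 rng = np.random.default_rng(0)
 n = 150                                                # invented 150-year record
 temp = 0.01 * np.arange(n) + rng.normal(0, 0.1, n)     # synthetic "true" temperature

 # Proxy tracks temperature except over the last 30 years, where it diverges (flattens).
 proxy = temp + rng.normal(0, 0.1, n)
 proxy[-30:] = proxy[-31] + rng.normal(0, 0.1, 30)

 # Calibrate proxy -> temperature by least squares over the last 50 years
 # (a window that includes the divergent period), then "reconstruct" the whole record.
 cal = slice(n - 50, n)
 slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)
 recon = slope * proxy + intercept

 early = slice(0, 50)
 print(temp[early].mean(), recon[early].mean())         # early reconstruction is biased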

Ocean-Atmosphere Interface

I am totally convinced the General Circulation Models (GCMs) cannot do a good job of modeling the ocean-atmosphere interface. If climate scientists would only spend more time trying to understand climate variability prior to the 20th century, they would learn a great deal. I am convinced the impact of the PDO is much bigger than most climate scientists are willing to admit. In the post below David Smith discusses some of the realities that make it difficult to model:

These Charts of the Week come from the European cooperative meteorology group ECMWF.
Number one is a global cross-section of the upper ocean temperature anomaly, along the Equator, given here. [20] The gray areas are continents (South America, Africa, etc) while the yellows are warmer-than-climatological and the greens and blues are cooler-than-normal. Note the lurking La Nina cool conditions in the eastern Pacific.
The actual temperatures are given here.[21] Since the average ocean depth is about 3700 meters, this chart represents less than 10% of the ocean. Mentally extrapolate that cold blue area downwards to 3700 meters and the small amount of ocean warmth (by human standards) becomes apparent, even here at the Equator.
Chart 3, given here,[22] is a cross-section of the Pacific temperature anomaly at 140W. Green and blue are cool, yellow and orange are warm. It shows the pending cool La Nina near the Equator (it has interesting dual lobes) and a warm region farther north (I believe that is reinforcement for the cool-phase PDO).
Chart 4, given here,[23] is a north-south cross-section of mid-Atlantic temperatures. Note the massive upwelling of cool water along the Equator. Also of note is the very warm water (+28C) and its depth (not much). The depth of very warm water is important to hurricane intensity - storms tend to be strongest when passing over ocean which has deep warm water, deep enough to keep the churning of the ocean from mixing in cool subsurface water.
Finally, the Atlantic temperature anomaly map, given here,[24] shows no particular subsurface temperature anomaly. Some of the near-Equator patterns may be AMM-related.
Interesting things, our oceans. (See comment #3)[25]

David later responded to a question by another poster saying:

Re #6 Hello, rj. That was a late-night post and my wording was poor. I was trying to note that the quantity of water that we humans consider “warm” is only a very small part of the ocean. (Note by Ron: the oceans have been cooling since 2003.)
I agree with your point on the effect on climate of even a 1 or 2C change in surface temperature. A seemingly small change in ocean mixing, which brings more cold deep water to the surface, could (and does) affect climate.
That is especially true in the tropics, where there is something of a “magic temperature” (28C) above which the ocean releases considerably more energy into the atmosphere via thunderstorms. If cool deep water manages to reduce the tropical ocean surface from, say 29C to 27C, then the release of tropical ocean heat is reduced considerably. Less tropical heat into the atmosphere affects global climate.
There is evidence that the tropical Indian Ocean (IO) temperature undergoes considerable swings, and there have been decadal-scale times when the IO temperature fell below 28C across the entire basin. That likely affected global climate. These IO temperature swings probably come from changes in upwelling of cold water. (comment #14)[26] RonCram 14:01, 21 May 2007 (UTC)[reply]

Heat content of the oceans

Apparently the oceans have not been cooling as much as previously reported. A correction to the earlier reports shows that the oceans have not been warming over the last five years either. Pielke blogs about it here. [27] RonCram 22:52, 7 June 2007 (UTC)[reply]

An interesting article on snow bias in the temperature record

Abstract The radiation shield bias of the maximum and minimum temperature system (MMTS) relative to the US Climate Reference Network (CRN) was investigated when the ground surface is snow covered. The goal of this study is to seek a debiasing model to remove temperature biases caused by the snow-covered surface between the MMTS and the US CRN. The side-by-side comparison of air temperature measurements was observed from four combinations of temperature sensor and temperature radiation shield in both MMTS and US CRN systems: (a) a standard MMTS system; (b) an MMTS sensor housed in the CRN shield; (c) a standard US CRN system; and (d) a CRN temperature sensor housed in the MMTS shield. The results indicate that the MMTS shield bias can be seriously elevated by the snow surface and the daytime MMTS shield bias can additively increase by about 1 °C when the surface is snow covered compared with a non-snow-covered surface. A non-linear regression model for the daytime MMTS shield bias was developed from the statistical analysis. During night-time, both the cooling bias and the warming bias of the MMTS shield existed with approximately equal frequencies of occurrence. However, the debiased night-time data based on the linear model developed in this study was less significant due to relatively smaller biases during night-time. The debiasing model could be used for the integration of the historical temperature data in the MMTS era and the current US CRN temperature data and it also could be useful for achieving a future homogeneous climate time series. This article is a US Government work and is in the public domain in the USA. Published in 2005 by John Wiley & Sons, Ltd.


Received: 10 November 2004; Revised: 3 February 2005; Accepted: 8 February 2005 [28] RonCram 06:18, 8 June 2007 (UTC)[reply]

Warm bias in the surface temperature record

A new effort is underway to determine the quality of the surface temperature record and to document any warm bias at local surface stations.[29] The effort is being led by Anthony Watts, a regular contributor to ClimateAudit.org. It is fully supported by both Steve McIntyre and Roger Pielke, Sr. [30] [31] If you have a camera and can follow directions, you could be a part of this team. RonCram 06:18, 8 June 2007 (UTC)[reply]

Hans Erren's page on Global Warming

Hans is a regular contributor to ClimateAudit.org. He has an interesting web page here. [32] RonCram 13:28, 8 June 2007 (UTC)[reply]

Unwarranted adjustments to the temperature record

This article needs to address two very important issues: 1. the controversy around unwarranted government adjustments to the temperature record and 2. ongoing research into the controversy regarding poor siting of temperature stations leading to a warming bias. Government adjustments to the temperature record are continuing. Thanks to David Smith for his post on ClimateAudit.org for these government images I link to below. See comment 87.[33] Compare the historical temperature ranges in the two images and the relative changes to years 1935 and 1998. The image from 1999 can be found here. [34] The image from 2007 is here. [35] In 1999, temps for 1935 and 1998 were the same. However, by 2007 the temp for 1998 was considerably higher than 1935. I have done enough reading now to be convinced that the 1990s were NOT warmer than the dust bowl years of the 1930s. I believe alarmists like Jim Hansen are playing with the temperature record. In effect, these "adjustments" to the temperature record are done in order to create evidence of global warming. I also believe there are a number of warming biases in our land surface temperature network as pointed out by the Davey and Pielke paper in 2005. [36] I am aware of the Peterson paper in 2006 which tried to say the problems Davey and Pielke found in eastern Colorado are not widespread and there is no warming bias. However, there are a number of problems with the Peterson paper. [37] Pielke has called for a thorough documentation of the sites, including photographs, and that effort is underway now, led by Anthony Watts [www.surfacestations.org] and encouraged by Pielke [38] [39] and Steve McIntyre.[40] I need some help locating additional reliable sources on temperature adjustments. If anyone would like to participate in this effort, you can go to my User Page and click the "Email this user" button and we can discuss where this information may be found. RonCram 11:09, 9 June 2007 (UTC)[reply]

I found another link to the Davey and Pielke 2005 paper. I believe this is the same paper as referenced above. [41] RonCram 14:21, 9 June 2007 (UTC)[reply]

Anthony Watts on why the quality of temperature stations is so poor

Anthony writes: The US Weather Bureau was formed primarily for the purpose of making forecasts and warnings. Here are some historical excerpts:

1870: A Joint Congressional Resolution requiring the Secretary of War “to provide for taking meteorological observations at the military stations in the interior of the continent and at other points in the States and Territories…and for giving notice on the northern (Great) Lakes and on the seacoast by magnetic telegraph and marine signals, of the approach and force of storms” was introduced. The Resolution was passed by Congress and signed into law on February 9, 1870, by President Ulysses S. Grant. An agency had been born which would affect the daily lives of most of the citizens of the United States through its forecasts and warnings.

May 30, 1889: An earthen dam breaks near Johnstown, Pennsylvania. The flood kills 2,209 people and wrecks 1,880 homes and businesses.

October 1, 1890: Weather Service is first identified as a civilian enterprise when Congress, at the request of President Benjamin Harrison, passes an act creating a Weather Bureau in the Department of Agriculture.

A weather sensitive sports event of this first year: 15th running of the Kentucky Derby.

1910: Weather Bureau begins issuing generalized weekly forecasts for agricultural planning; its River and Flood Division begins assessment of water available each season for irrigating the Far West.

1914: An aerological section is established within the Weather Bureau to meet growing needs of aviation; first daily radiotelegraphy broadcast of agricultural forecasts by the University of North Dakota

1933: A science advisory group apprizes President Franklin D. Roosevelt that the work of the volunteer Cooperative Weather observer network is one of the most extraordinary services ever developed, netting the public more per dollar expended than any other government service in the world. By 1990 the 25 mile radius network encompasses nearly 10,000 stations.

1941: Dr. Helmut Landsberg, “the Father of Climatology,” writes the first edition of his elementary textbook entitled, “Physical Climatology.” Two women are listed as observer and forecaster in the Weather Bureau.

So as you can see, the US Weather Bureau was formed to provide forecasts to the military, then later as a civilian enterprise, with an emphasis on forecasts and warnings. Climatology was an afterthought, only coming into being around 1941.

The problem that we have today is that researchers think of the data gathering as having been done scientifically, with appropriate controls, when in reality it's been mostly ad hoc and left to the whims and nuances of the observer and the location.

End Quote. See Comment #68. [42]

Anthony Watts is starting to get some press. [43]

Post by bender

Here's the quote: Now, you tell me what the RCers couldn’t: what caused the Arctic warming in the 1930s-40s? Hansen et al. (2007) admit that they do not know, and are ready to give up looking for a regional-scale forcing, factor X. You will notice how that issue was dodged completely at RC? Is it possible that our estimate of the current global warming trend is exaggerated by a modern recurrence of factor X? Yes, it is.

“Where is the heat coming from?” This sounds like Steve Milesworthy. Remember: you have not censused the temperature field. You have sampled it. Incompletely (and highly inefficaciously, I might add). Now, what if the heat bounces around chaotically from areas where you’ve sampled accurately (precious few) to areas where you’ve not sampled or sampled inaccurately (the great majority of land surface, sea surface, ocean depth, earth depth, upper atmosphere, etc.)? What will that do to your search for “global” forcings and your “global” parameter estimates? It will bias them. In short, I am suggesting the CO2 sensitivity parameter may be over-estimated because of SEVERAL reasons, inappropriate model choice and faulty parameterization being just one of these.

None of these issues are on the table however - because the issue of uncertainty surrounding parameter estimates has been taken off the table from day one. If the “experts” would put the statistical issues on the table, it could be discussed by people more qualified than myself. So why is it off the table? Why is it the unmentionable topic? Why is it left to the “unqualified” people to do what the qualified should be doing?

In conclusion: climate chaos is not a hypothesis designed to discredit the models. It is just one of the reasons why climatologists might be hooked on overfitting ad hoc models. Overfit, ad hoc models are error-prone models. Maybe there is some error in the estimation of the CO2 sensitivity coefficient? It is just a question. Why can this question not be asked without stirring a witch-hunt? End quote. I think bender is right that the IPCC is overestimating CO2 sensitivity because increasing CO2 is subject to the Law of Diminishing Returns. Early increases in CO2 generated the greatest warming and any new CO2 will not increase warming much if at all. RonCram 04:45, 10 June 2007 (UTC)[reply]

Gerald Browning on climate models

Dr. Browning has written a number of peer-reviewed articles and has worked on climate models. He knows the limits of their abilities quite well. Here are some quotes:

  • "You seem to have forgotten that I worked on those “glorious” climate models at NCAR for a number of years and you might want to look at the manuscript on microphysics that I wrote with Chungu Lu. I am well aware of the “physics” in these models and the crudeness of the assumptions is pathetic." See comment #110. [44]
  • "Take a look at the documentation for the WRF model on the WRF site... the document called "A Description of the Advanced Research WRF Version 2." Even on the contents page it is possible to see that are many different parameterizations for the same physical phenomenon included in the same model and the number of adjustable parameters is not 3 or 4 in any of the microphysics schemes, let alone the boundary layer, turbulent mixing and model filters, etc. The climate models also have many knobs, but the parameterizations are even cruder because the climate models do not resolve any of the smaller scales of motion." See comment #127. [45]
  • See comment #133. [46]
  • Top of head I can already think of several examples (of chaos in climate):
- NINO is an obvious one
- number of large scale extreme events per year is another (or in physical terms energy dissipated by such events per year)
- NAO is an interesting case that is probably interacting with Nino what would yield a certain amount of pseudo periods
- a very prominent case is the arctic ice mass variation - chaotic with badly understood multidecadal cycles superposing on shorter term variations
- average yearly cloudiness variation is not understood but certainly chaotic. See comment #156. [47]
  • To sum it up I’ll quote R.Pielke :
The claim by the IPCC that an imposed climate forcing (such as added atmospheric concentrations of CO2) can work through the parameterizations involved in the atmospheric, land, ocean and continental ice sheet components of the climate model to create skillful global and regional forecasts decades from now is a remarkable statement. That the IPCC states that this is a “much more easily solved problem than forecasting weather patterns just weeks from now” is clearly a ridiculous scientific claim. As compared with a weather model, with a multi-decadal climate model prediction there are more state variables, more parameterizations, and a lack of constraint from real-world observed values of the state variables. See comment #156. [48]
  • "Everybody should read what the people really do with ENSO simulations. Actually simulation is a quite strong word - what they do is to assume a delayed oscilator (because a qualitative argument shows that such a system if it is extremely simplified would behave like a delayed oscilator) then they add an “atmospheric noise” aka assume that everything else is random and make computer runs. As your basic model is both bounded and mathematicaly reasonable the output stays bounded and looks reasonable. The random part creates variability and the oscilator part creates stability. Of course both the model and the results have nothing to do with the reality but if you do, say, a 200 years run, you get a pseudo periodic curve (f.ex for the pressures) that looks like an ENSO. Now you can infinitely vary boundary and initial conditions, the randomizing of the “noise” , couple it , uncouple it and generally publish one paper per year. However all what such papers do is to say in a very sophisticated and costly way what a delayed oscilator with a random component does when certain assumptions are met or not met. No chance that you ever get in unstabilities or fast transitions - a delayed oscilator doesn’t use to do that. Now if you were to say “Hey guys but ENSO is anything but a delayed oscilator with noise” you’d only get uncomprehending stares and a “Huh? What are you talking about and who are you first place?” See comment #172. [49]
  • "The problem with atmospheric models is that the kinematic viscosity is unphysically large (not even close to resolving the minimal scale for the atmosphere if one exists) and the forcing terms have O(1) errors at all scales, i.e. the errors in the forcing terms are not small. Thus atmospheric models do not satisfy the above conditions and never will. In addition, the hydrostatic system is ill posed and the nonhydrostatic system is sensitive (displays fast exponential growth) near a jet." See comment #219. [50]

IPCC suppressing evidence in peer reviewed publications

Roger Pielke has been very aggressive lately about the IPCC cherry-picking articles that favor the alarmist position and suppressing climate views and evidence contrary to their position. [51] [52] [53] RonCram 13:30, 24 June 2007 (UTC)[reply]

IPCC and the "forecasting vs. prediction" debate

Walter Starck had an interesting post on Pielke's blog stating: Armstrong and Green in a new study auditing the methodology used in global warming forecasts state the following: “In apparent contradiction to claims by some climate experts that the IPCC provides “projections” and not “forecasts”, the word “forecast” and its derivatives occurred 37 times, and “predict” and its derivatives occur 90 times in the body of Chapter 8. Recall also that most of our respondents (29 of whom were IPCC authors or reviewers) nominated the IPCC report as the most credible source of forecasts (not projections) of global average temperature.”

A&G also state: “…such models are, in effect, mathematical ways for the experts to express their opinions. To our knowledge, there is no empirical evidence to suggest that presenting opinions in mathematical terms rather than in words will contribute to forecast accuracy.”

“Pilkey and Pilkey-Jarvis (2007) concluded that the long-term climate forecasts that they examined were based only on the opinions of the scientists. The opinions were expressed in complex mathematical terms. There was no validation of the methodologies.”

“It is hard to understand how scientific forecasting could be conducted without any reference to the literature on how to make such forecasts.”

This study has not yet been formally published but has been posted for open source peer review at: http://publicpolicyforecasting.com/ It is well worth reading.

Comment by Walter Starck — June 23, 2007 @ 7:28 pm RonCram 13:30, 24 June 2007 (UTC)[reply]

No quality certification of climate models

It was interesting to read a post on Climate Audit by Steve Mosher. He writes:

This is not about language choice. Good code can be written in every language. And Good programmers should know all the major languages, it’s not rocket science. The issue is this.
My toaster has to have UL certs.
My Computer has to have FCC certs
Nasa’s global climate model should have to pass a certification test.
You see, they cannot have it both ways. They cannot, on one hand, argue that their Modelling is mission critical to planet earth, and then, on the other hand, wave off Quality questions. (See comment #82) [54]

Good point, Steve. It seems the climate modelers have not learned the basics of scientific forecasting, as J. Scott Armstrong recently pointed out. [55] RonCram 05:30, 25 June 2007 (UTC)[reply]

Lubos Motl on Climate Sensitivity and CO2 Saturation

Motl writes:

You should realize that the carbon dioxide only absorbs the infrared radiation at certain frequencies, and it can only absorb the maximum of 100% of the radiation at these frequencies. By this comment, I want to point out that the "forcing" - the expected additive shift of the terrestrial equilibrium temperature - is not a linear function of the carbon dioxide concentration. Instead, the additional greenhouse effect becomes increasingly unimportant as the concentration increases: the expected temperature increase for a single frequency is something like
1.5 ( 1 - exp[-(concentration-280)/200 ppm] ) Celsius
The decreasing exponential tells you how much radiation at the critical frequencies is able to penetrate through the carbon dioxide and leave the planet. The numbers in the formula above are not completely accurate and the precise exponential form is not quite robust either but the qualitative message is reliable. When the concentration increases, additional CO2 becomes less and less important.

This is an extremely important concept that climate scientists seem unable to grasp. RonCram 15:32, 2 July 2007 (UTC)[reply]
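
Just to make Motl's qualitative point easy to see, here is a minimal evaluation of the expression he quotes (his own caveat that the numbers and the exponential form are only rough applies equally here; the example concentrations are illustrative):

 import math

 def motl_forcing_shift(co2_ppm):
     # Rough expression quoted above: delta_T ~ 1.5 * (1 - exp(-(C - 280) / 200)),
     # with C in ppm and the result in degrees C.  Illustration only.
     return 1.5 * (1.0 - math.exp(-(co2_ppm - 280.0) / 200.0))

 for c in (280, 380, 560, 1120):
     print(c, round(motl_forcing_shift(c), 2))
 # Each successive increase in CO2 adds less warming than the previous one,
 # which is the "diminishing returns" behaviour described above.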

Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics

A new paper by Gerhard Gerlich and Ralf D. Tscheuschner claims to have falsified the physics underlying the greenhouse theory. [56] I have only skimmed the paper so far. It has not been peer reviewed for publication, but a number of the authors' academic friends have made some comments on the paper. It will be interesting to see what kind of reception this paper gets. RonCram 03:38, 29 July 2007 (UTC)[reply]

Evidence the Earth is cooling

We have evidence the oceans have been cooling since 2003. I just came across an older article that indicates evidence of cooling even earlier. [57] I have not had a chance to read this yet, but it looked interesting. RonCram 14:03, 30 July 2007 (UTC)[reply]

Line-by-line calculation of atmospheric fluxes and cooling rates

I have not had a chance to read this yet, but it looks interesting. [58] RonCram 14:47, 4 August 2007 (UTC)[reply]

Mitigating New York City's Heat Island with Urban Forestry

This study indicates the heat island effect for NYC is 7 °F. That is quite significant, and much more significant than climate scientists would want you to think. [59] RonCram 14:34, 5 August 2007 (UTC)[reply]

Steve McIntyre debunks Peterson 2003

Roger Pielke has already refuted Peterson's 2006 paper. [60] Now McIntyre has debunked Peterson's 2003 paper. [61] Peterson claimed “rural station trends were almost indistinguishable from series including urban sites.” McIntyre shows that cities warmed about 2 °C more than rural sites over the 20th century. Anthony Watts and SurfaceStations.org are showing that a large number of the rural stations have a warming bias from poor-quality siting. RonCram 02:42, 6 August 2007 (UTC)[reply]

Criticism of the IPCC

An interesting article in the Financial Times criticizes the IPCC for a flawed process, actions outside its charter, errors in the way emissions scenarios were calculated, withholding of data and methods, and pervasive bias. [62] RonCram 14:19, 6 August 2007 (UTC)[reply]

Recent scientific opinion on climate change

Dr. Schulte conducted a study of climate papers published since 2004. Of the 539 papers reviewed, only 45% expressed an explicit or implicit endorsement of the majority view, and about 6% (32 papers) expressed an explicit or implicit rejection of the consensus. If unanimity on the subject ever existed, it does not exist now. [63] RonCram

Card's article discusses Mann's unethical behavior

Newsweek debunks itself

"Newsweek Editor Calls Mag's Global Warming 'Deniers' Article 'Highly Contrived'" [64] [65] [66] RonCram 14:30, 13 August 2007 (UTC)[reply]

America COMPETES Act

This directly relates to the global warming controversy because this new US law requires government scientists to provide their data, methods and code. It will apply to NASA GISS climatologists and NOAA climatologists. At present these agencies do not provide the relevant information when it is requested by researchers such as Stephen McIntyre. Here is a relevant excerpt:

SEC. 1009. RELEASE OF SCIENTIFIC RESEARCH RESULTS.
(a) Principles- Not later than 90 days after the date of the enactment of this Act, the Director of the Office of Science and Technology Policy, in consultation with the Director of the Office of Management and Budget and the heads of all Federal civilian agencies that conduct scientific research, shall develop and issue an overarching set of principles to ensure the communication and open exchange of data and results to other agencies, policymakers, and the public of research conducted by a scientist employed by a Federal civilian agency and to prevent the intentional or unintentional suppression or distortion of such research findings. The principles shall encourage the open exchange of data and results of research undertaken by a scientist employed by such an agency and shall be consistent with existing Federal laws, including chapter 18 of title 35, United States Code (commonly known as the `Bayh-Dole Act’). The principles shall also take into consideration the policies of peer-reviewed scientific journals in which Federal scientists may currently publish results. (page 11 of 208 of the Act) [67] RonCram 01:04, 20 August 2007 (UTC)[reply]

On nonstationarity and antipersistency in global temperature series

This peer-reviewed paper by Karner finds that the climate is dominated by negative feedbacks and sees no evidence of anthropogenic climate change. Here is a quote from the abstract:

Statistical analysis is carried out for satellite-based global daily tropospheric and stratospheric temperature anomaly and solar irradiance data sets. Behavior of the series appears to be nonstationary with stationary daily increments. Estimating long-range dependence between the increments reveals a remarkable difference between the two temperature series. Global average tropospheric temperature anomaly behaves similarly to the solar irradiance anomaly. Their daily increments show antipersistency for scales longer than 2 months. The property points at a cumulative negative feedback in the Earth climate system governing the tropospheric variability during the last 22 years. The result emphasizes a dominating role of the solar irradiance variability in variations of the tropospheric temperature and gives no support to the theory of anthropogenic climate change. The global average stratospheric temperature anomaly proceeds like a 1-dim random walk at least up to 11 years, allowing good presentation by means of the autoregressive integrated moving average (ARIMA) models for monthly series.

RonCram 11:21, 21 August 2007 (UTC)[reply]
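
As a rough illustration of what "antipersistency" of the increments means, here is a minimal Python sketch using synthetic data (not the satellite series Karner analyzed): negatively correlated daily increments pull a series back toward its mean, which is the statistical signature of a net negative feedback.

import numpy as np

rng = np.random.default_rng(0)
steps = rng.normal(size=8000)

walk = np.cumsum(steps)                            # plain random walk: uncorrelated increments
anti = np.cumsum(steps - 0.5 * np.roll(steps, 1))  # each step partly reverses the last one

def lag1_autocorr_of_increments(x):
    # Lag-1 autocorrelation of the first differences of x.
    dx = np.diff(x)
    return np.corrcoef(dx[:-1], dx[1:])[0, 1]

print("random walk:   ", round(lag1_autocorr_of_increments(walk), 3))   # near 0
print("antipersistent:", round(lag1_autocorr_of_increments(anti), 3))   # near -0.4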

New paper by Roy Spencer

Spencer found a negative feedback in the tropics that he thinks confirms the Lindzen hypothesis of the "infrared iris." [68]RonCram 02:31, 9 October 2007 (UTC)[reply]

New research by Stephen Schwartz on climate forcing of CO2

Stephen Schwartz of the Environmental Sciences Department/Atmospheric Sciences Division of Brookhaven National Laboratory has just released a preprint of an article accepted for publication by Journal of Geophysical Research titled "Heat Capacity, Time Constant and Sensitivity of Earth's Climate System." Quote:

ABSTRACT. The equilibrium sensitivity of Earth's climate is determined as the quotient of the relaxation time constant of the system and the pertinent global heat capacity. The heat capacity of the global ocean, obtained from regression of ocean heat content vs. global mean surface temperature, GMST, is 14 ± 6 W yr m-2 K-1, equivalent to 110 m of ocean water; other sinks raise the effective planetary heat capacity to 17 ± 7 W yr m-2 K-1 (all uncertainties are 1-sigma estimates). The time constant pertinent to changes in GMST is determined from autocorrelation of that quantity over 1880-2004 to be 5 ± 1 yr. The resultant equilibrium climate sensitivity, 0.30 ± 0.14 K/(W m-2), corresponds to an equilibrium temperature increase for doubled CO2 of 1.1 ± 0.5 K. The short time constant implies that GMST is in near equilibrium with applied forcings and hence that net climate forcing over the twentieth century can be obtained from the observed temperature increase over this period, 0.57 ± 0.08 K, as 1.9 ± 0.9 W m-2. For this forcing considered the sum of radiative forcing by incremental greenhouse gases, 2.2 ± 0.3 W m-2, and other forcings, other forcing agents, mainly incremental tropospheric aerosols, are inferred to have exerted only a slight forcing over the twentieth century of -0.3 ± 1.0 W m-2. (http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf)

Schwartz goes on to say that the value found for climate sensitivity is well below the current estimates as publicized by the IPCC. RonCram 13:50, 22 August 2007 (UTC)[reply]
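
The headline numbers in the abstract can be checked with simple arithmetic; here is a minimal Python sketch using the quoted values. The 3.7 W m-2 forcing for doubled CO2 is the commonly used figure and is my assumption, not taken from the abstract.

time_constant_yr = 5.0   # relaxation time constant of GMST, from the abstract
heat_capacity = 17.0     # effective planetary heat capacity, W yr m-2 K-1, from the abstract
forcing_2xco2 = 3.7      # assumed canonical forcing for doubled CO2, W m-2

sensitivity = time_constant_yr / heat_capacity   # ~0.29 K/(W m-2), vs 0.30 +/- 0.14 quoted
warming_2xco2 = sensitivity * forcing_2xco2      # ~1.1 K, matching the abstract

print(round(sensitivity, 2), round(warming_2xco2, 1))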

Others who propose lower climate sensitivity

Major climate shifts start in the ocean

"A new dynamical mechanism for major climate shifts" is a paper by Anastasios A. Tsonis, Kyle Swanson, & Sergey Kravtsov.[71] For a full text draft of the article. [72] RonCram 15:35, 26 August 2007 (UTC)[reply]

Another paper claims the solar system regulates climate. [73] RonCram 14:49, 27 August 2007 (UTC)[reply]

Unwarranted adjustments to the temperature record

Apparently, some of the warming is intentional. The keepers of the temperature record make adjustments without explaining or justifying them. Sometimes the adjustments make sense and sometimes they do not. Here is what McIntyre says about some GISS adjustments of former Soviet Union stations:

Here’s what I’ve noticed about the data (posted up here)
1) After Jan 1991, when there is only one version (version 2), the GISS-combined is exactly equal to that version.
2) For December 1990 and earlier, Hansen subtracts 0.1 deg C from version 2, 0.2 deg C from version 1 and 0.3 deg C from version 0.
If I average the data so adjusted, I get the NASA-combined version up to rounding of 0.05 deg C. Why these particular values are chosen is a mystery to say the least. Version 1 runs on average a little warmer than version 0 where they diverge ( and they are identical after 1980). So why version 0 is adjusted down more than version 1 is hard to figure out. [74]

If you adjust older temperatures down, more recent temperatures look warmer by comparison. In the case of version 0, Hansen's adjustments amount to nearly half of the observed global warming. RonCram 01:28, 31 August 2007 (UTC)[reply]
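
For clarity, here is a minimal Python sketch of the combination rule exactly as McIntyre describes it above; the station values are made up, and only the 0.1/0.2/0.3 °C offsets and the averaging come from his comment.

def giss_combined(v0, v1, v2, before_1991=True):
    # After Jan 1991 the combined series is simply version 2;
    # before that, fixed amounts are subtracted and the results averaged.
    if not before_1991:
        return v2
    return ((v0 - 0.3) + (v1 - 0.2) + (v2 - 0.1)) / 3.0

print(giss_combined(-12.4, -12.2, -12.3, before_1991=True))   # pre-1991 rule
print(giss_combined(-12.4, -12.2, -12.3, before_1991=False))  # post-1991 rule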

  • Stott, Philip. "AntiEcoHype – Climate Change & Global Warming – A Selection of Critical Commentaries by Philip Stott". Philip Stott's personal website.

World Conference on Research Integrity to Foster Responsible Research

Steve McIntyre reports: "The World Conference on Research Integrity convened in Portugal from September 16 to 19. They refer to two incidents - the misrepresentation of the examination of station history in China and the NASA Y2K problem..." [77] RonCram 13:44, 28 September 2007 (UTC)[reply]

Howard Alper has a nice quote here. [78] RonCram 18:55, 5 November 2007 (UTC)[reply]

A skeptical layman

Warren Myer writes an interesting blog. [79] RonCram 13:44, 28 September 2007 (UTC)[reply]

Arctic sea ice

Kahl JD et al., “Absence of evidence for greenhouse warming over the Arctic Ocean in the past 40 years”, Nature, v361, 335-337 (1993) RonCram 01:19, 29 September 2007 (UTC)[reply]

Northwest passage was open in 1905 and 1944

A great deal of talk about the opening of the Northwest Passage has convinced some people that global warming is real. The melting Arctic sea ice is more a result of regional warming than global warming; Southern Hemisphere sea ice is not melting. Perhaps more important is to realize that it is within normal climate variation for the Northwest Passage to open up occasionally. Roald Amundsen of Norway was the first person to successfully navigate the fabled Northwest Passage (1903-1905). [80] The passage opened again in 1944. [81] In total, more than 60 vessels have taken it over the years. [82] RonCram 12:13, 29 September 2007 (UTC)[reply]

Reports of "unprecedented" arctic sea ice also happened in 1922. Note the discussion of warm currents at the time. [83] RonCram 13:22, 9 November 2007 (UTC)[reply]

Scientific Responsibility in Global Climate Change Research

I came across a letter to the editor of Science magazine written in 1999 by SI Rasool that is still fitting today. It is called "Scientific Responsibility in Global Climate Change Research" and can be found here. [84]RonCram 14:26, 30 September 2007 (UTC)[reply]

Stephen Schwartz wrote an article called "Quantifying climate change — too rosy a picture?" The abstract says: "The latest report from the Intergovernmental Panel on Climate Change assesses the skill of climate models by their ability to reproduce warming over the twentieth century, but in doing so may give a false sense of their predictive capability." The uncertainty surrounding projections of catastrophic consequences is much higher than the IPCC suggests. RonCram 19:24, 30 September 2007 (UTC)[reply]

High interannual variability of arctic sea ice

Laxon's paper is quite helpful in explaining that Arctic sea ice levels are a result of dynamic forcings (wind and ocean) related to regional climate rather than global warming. [85] RonCram 03:43, 9 October 2007 (UTC)[reply]

It seems JPL did a study on Arctic ice melt and came to a similar finding. "The results suggest not all the large changes seen in Arctic climate in recent years are a result of long-term trends associated with global warming." [86] —Preceding unsigned comment added by RonCram (talkcontribs) 01:16, 27 November 2007 (UTC)[reply]

Do climate models use scientific forecasting?

Here are a couple of interesting articles on J. Scott Armstrong.

These news articles came out because of Armstrong's article claiming global warming predictions were unfounded. [87] For more information, see his website http://www.forecastingprinciples.com/

Also, Orrin Pilkey has written a book called "Useless Arithmetic" that argues nature cannot be accurately predicted by quantitative models. RonCram 11:22, 9 October 2007 (UTC)[reply]

The failure of climate models

Regional climate models fail more often than they are correct. GCMs fail also, especially in predicting warming in Antarctica. [88] RonCram 11:40, 9 October 2007 (UTC)[reply]

Other articles support this point. [89]RonCram (talk) 10:38, 28 January 2008 (UTC)[reply]

Interesting paper by Svensmark

He believes the evidence shows the surface temperature record is not reliable. [90] RonCram 12:17, 9 October 2007 (UTC)[reply]

Report Card on Arctic by NOAA

See this [91] RonCram 15:44, 20 October 2007 (UTC)[reply]

Are the oceans soaking up less CO2?

See this paper: Rorsch A, Courtney RS & Thoenes D, 'The Interaction of Climate Change and the Carbon Dioxide Cycle', Energy & Environment, vol. 16, no. 2 (2005). RonCram 15:44, 20 October 2007 (UTC)[reply]

Wegman Report and Fact Sheet

Wegman Report [92] Fact Sheet [93] RonCram 19:11, 5 November 2007 (UTC)[reply]

John Christy's WSJ piece

[94]RonCram 13:44, 9 November 2007 (UTC)[reply]

Interesting comment by MarkW

Comment #132 says:

I’ve recently read that the amount of UV produced by the sun varies as much as 100% during the solar cycle.
UV produces ozone.
Ozone is a GHG.
JohnV has been adamant that the changes in energy output are insufficient to directly cause the changes in temperatures that have been seen. This could be one of the missing factors.

See ClimateAudit here. [95] RonCram 14:28, 12 November 2007 (UTC)[reply]

Harris/Mann Climatology

See this. [96] RonCram 14:32, 12 November 2007 (UTC)[reply]

Craig Loehle temp reconstruction

Loehle did a temperature reconstruction without tree-ring data. [97] RonCram (talk) 14:58, 20 November 2007 (UTC)[reply]


World Conference on Research Integrity to Foster Responsible Research

The European Science Foundation has reported on the "World Conference on Research Integrity," which met in Portugal from September 16 to 19. It was organized by the ESF and the U.S. Office of Research Integrity. [98] They discussed two incidents touching on global warming: the misrepresentation of the examination of station history in China and the NASA error found by Steve McIntyre. The misrepresentation regarding Chinese station histories is an issue being pushed by Doug Keenan. Keenan has accused Wang, a co-author of Phil Jones, of unethical behavior. [99] Both of these issues were originally raised by Steve McIntyre. [100] It seems that misconduct by climatologists pushing an alarmist view of global warming is becoming a more prominent issue all the time. RonCram (talk) 14:58, 20 November 2007 (UTC)[reply]

Surfacestations.org gets more media attention

See this. [101] RonCram (talk) 15:27, 5 December 2007 (UTC)[reply]

New paper by Kiehl admits the obvious

"The magnitude of applied anthropogenic total forcing compensates for the model sensitivity." [102] Stephen McIntyre discusses it here. [103] RonCram (talk) 15:27, 5 December 2007 (UTC)[reply]

Ross McKitrick has written two op-ed pieces

National Post. [104] Christian Science Monitor. [105] RonCram (talk) 23:09, 5 December 2007 (UTC)[reply]

The paper can be found here. [106] Background information about the research can be found here: http://icecap.us/images/uploads/MM.JGR07-background.pdf

Willis E on Hansen's model predictions

Steve McIntyre has posted Willis E's discussion of Hansen's model predictions. Willis shows that Hansen started the series higher than he should have and when this adjustment is made, it is clear Hansen is overestimating climate sensitivity to CO2. [107] RonCram (talk) 15:26, 12 December 2007 (UTC)[reply]

Petr Chylek's new paper

Read the abstract here. [108] RonCram (talk) 01:14, 22 December 2007 (UTC)[reply]

Arctic sea ice refreezing at record pace

Just as after previous melts of Arctic sea ice, the ice is refreezing quickly. Since we did not have satellites in 1906 and 1945, scientists can say the refreezing is at a record pace. [109] RonCram (talk) 18:30, 23 December 2007 (UTC)[reply]

Inhofe's reports

Debunking the "consensus." [110] [111] RonCram (talk) 23:13, 23 December 2007 (UTC)[reply]

Steven Milloy on other scientists with low estimates of climate sensitivity

Milloy does not have a good reputation among some people, so his compilation needs to be verified. He lists a number of good scientists who have a low estimate of climate sensitivity. [112] RonCram (talk) 19:07, 27 December 2007 (UTC)[reply]

The Earth's climate is seesawing

Here is a paper claiming the warming is seesawing back and forth between the north and south Atlantic oceans. [113] RonCram (talk) 19:07, 27 December 2007 (UTC)[reply]

Pielke claims UAH MSU data is more accurate than RSS data

The UAH analysis of the MSU data is maintained by John Christy at the University of Alabama in Huntsville. See Pielke's comment here. [114] RonCram (talk) 12:43, 14 January 2008 (UTC)[reply]

Forecast verifications show global warming has been overpredicted

Hansen testified before Congress in 1988 and published his predictions. We now have 20 years of data to show he was wrong.[115] The IPCC published their predictions in 1990 and we now know they were wrong. [116] RonCram (talk) 14:05, 19 January 2008 (UTC)[reply]

Cosmic Rays and Solar Activity

Svensmark was not the first to publish on this topic. Palle Bago published in 2000. [117] Sloan and Wolfendale have tested this hypothesis and found it wanting, but I am not convinced. [118] RonCram (talk) 16:56, 6 April 2008 (UTC)[reply]

Problems with the GCMs

David Douglass and co-authors published a paper in 2007 showing that an ensemble of 22 GCMs does not match observations in the tropical troposphere. An essential point of AGW theory is that the troposphere will warm more quickly than the surface, especially in the tropics. The GCMs match the theory well; however, neither the GCMs nor the theory matches the observations. [119]

The Douglass paper was criticized by RealClimate for applying the wrong test to the data (even though it was the test recommended by the IPCC). Cliff Huston decided to apply the more robust t-test, and it showed exactly where the problems lie with the GCMs. See his comment here. [120] RonCram (talk) 15:05, 18 May 2008 (UTC)[reply]
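
To make the statistical disagreement concrete, here is a minimal Python sketch, with made-up trend numbers, of the kind of consistency check at issue: comparing an observed tropical-troposphere trend to the trends from an ensemble of model runs. It is only an illustration of the two competing tests, not the actual Douglass or Huston calculation.

import math

model_trends = [0.21, 0.25, 0.18, 0.30, 0.27, 0.22, 0.24, 0.29, 0.20, 0.26]  # deg C/decade, hypothetical
observed = 0.08                                                              # hypothetical

n = len(model_trends)
mean = sum(model_trends) / n
sd = math.sqrt(sum((t - mean) ** 2 for t in model_trends) / (n - 1))
se = sd / math.sqrt(n)   # standard error of the ensemble mean

# Douglass-style test: is the observation within ~2 standard errors of the ensemble mean?
print("mean +/- 2*SE:", round(mean - 2 * se, 3), "to", round(mean + 2 * se, 3))
# Looser test preferred by the critics: within ~2 standard deviations of the ensemble spread?
print("mean +/- 2*SD:", round(mean - 2 * sd, 3), "to", round(mean + 2 * sd, 3))
print("observed:", observed)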

Demetris Koutsoyiannis has also given a presentation comparing climate models to historical climate. Climate models have been making predictions since 1990, so we now have 18 years of data to compare. [121]

Pat Frank, a regular contributor to ClimateAudit, published an article in Skeptic magazine on GCMs. [122]RonCram (talk) 19:34, 11 May 2008 (UTC)[reply]

Also, Greg Holloway published "From Classical to Statistical Ocean Dynamics" in Surveys in Geophysics in 2004. Holloway is very unhappy with ocean modeling and proposes the use of statistical dynamics to correct the models. He writes: "In practice we cannot solve for oceans, lakes or most duck ponds on the scales for which these equations apply." [123] RonCram (talk) 13:24, 13 May 2008 (UTC)[reply]

William Briggs also wrote a blog posting outlining some of the problems with GCMs. [124] RonCram (talk) 04:05, 14 May 2008 (UTC)[reply]

"An extremely simple univariate statistical model called IndOzy" does as good a job at predicting ENSO events as extremely complex GCMs. The authors, Halide and Ridd, make striking observations about what this says about the utility and reliability of the current crop of climate models. [125] RonCram (talk) 04:36, 15 May 2008 (UTC)[reply]

Another new paper has concluded that the GCMs are overpredicting global warming and identified the reason. See Sun, D.-Z., Y. Yu, and T. Zhang, 2007: Tropical Water Vapor and Cloud Feedbacks in Climate Models: A Further Assessment Using Coupled Simulations. J. Climate. [126] Roger Pielke summarizes the paper:

The message from the Sun et al. study, therefore, is that in the models used to make the multi-decadal global climate projections reported in the IPCC report, "…underestimating the negative feedback from cloud albedo and overestimating the positive feedback from the greenhouse effect of water vapor over the tropical Pacific during ENSO is a prevalent problem of climate models." [127]

Open for editing

The entry below is written for the Global warming controversy article and is open for editing by those who would like to see it improved to the point it will be accepted into the article. RonCram (talk) 20:30, 18 May 2008 (UTC)[reply]

Confidence in GCM forecasts

The IPCC states it has increased confidence in forecasts coming from General Circulation Models or GCMs. Chapter 8 of AR4 reads:

There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.[128]

Skeptics believe this confidence in the models’ ability to predict future climate is not earned. Roger Pielke points out a mistake common among climatologists:

This is a typical mistake they are making; a model is itself a hypothesis and cannot be used to prove anything! The multi-decadal global model simulations only provide insight into processes and interactions, but we must use real world data to test the models. [129]

Pielke also writes:

As a necessary condition for global climate models to even be claimed to have predictive skill, they must have each of the first-order climate forcings and feedbacks that are overviewed in the Summary. At present (2005) none of the models do. If a global model claims predictive skill yet does not have all of the important direct and indirect radiative, and non-radiative forcings, as summarized in the Report, the model results must be interpreted as a process study, and not be communicated to policymakers as predictions (projections). They can be used to express concern that each of the first-order climate forcings are altering our climate, but to transmit results from the models as forecasts in planning is overselling the capability of the models. [130]

Do the models used by AR4 include all of the forcings and feedbacks?

No. Working Group 1 placed a deadline on scientific papers: for inclusion in AR4, papers had to be in by May 2006. Conclusions from papers published after May 2006 are not included in AR4.

In 2007, Roy Spencer and co-authors published a report on their observation of a new negative feedback over the tropics. They identified this feedback as the "Infrared Iris" effect hypothesized by Richard Lindzen. Spencer concludes:

Since these intraseasonal oscillations represent a dominant mode of convective variability in the tropical troposphere, their behavior should be considered when testing the convective and cloud parameterizations in climate models that are used to predict global warming.[131]

Since the GCMs did not include the Infrared Iris negative feedback, it is possible other forcings and feedbacks may also be missing.

Do models match important aspects of current climate?

The IPCC claims that confidence is high that GCM projections are accurate because the GCMs are able to simulate many features of climate. The AR4 states:

A second source of confidence comes from the ability of models to simulate important aspects of the current climate. Models are routinely and extensively assessed by comparing their simulations with observations of the atmosphere, ocean, cryosphere and land surface. Unprecedented levels of evaluation have taken place over the last decade in the form of organized multi-model ‘intercomparisons’. Models show significant and increasing skill in representing many important mean climate features, such as the large-scale distributions of atmospheric temperature, precipitation, radiation and wind, and of oceanic temperatures, currents and sea ice cover.

Skeptics have published peer-reviewed research indicating the GCMs do not model key observations well. For example, one key component of the theory of Anthropogenic Global Warming is that as atmospheric CO2 increases, the troposphere will warm more quickly than the surface and this will be most pronounced in the tropics. David Douglass and co-authors compared the ensemble of GCMs to observations and found the GCMs were consistent with the theory but did not match observations. [132]

Do model projections match observations over the last two decades?

Computer models have been making projections for 18-20 years. Do those projections match with observations? The IPCC says yes:

Model global temperature projections made over the last two decades have also been in overall agreement with subsequent observations over that period (Chapter 1).

Some scientists answer No. Demetris Koutsoyiannis compared climate models to historical climate, looking largely to the hydrological cycle (which also affects temperature). Here are select conclusions from his presentation:

• GCMs generally reproduce the broad climatic behaviours at different geographical locations and the sequence of wet/dry or warm/cold periods on a mean monthly scale.
• However, model outputs at annual and climatic (30 year) scales are irrelevant with reality; also, they do not reproduce the natural overyear fluctuation and, generally, underestimate the variance and the Hurst coefficient of the observed series; none of the models proves to be systematically better than the others.
• The huge negative values of coefficients of efficiency at those scales show that model predictions are much poorer than an elementary prediction based on the time average.
• This makes future climate projections not credible.
• The GCM outputs of AR4, as compared to those of TAR, are a regression in terms of the elements of falsifiability they provide, because most of the AR4 scenarios refer only to the future, whereas TAR scenarios also included historical periods. [133]

Other scientists and statisticians have blogged on the issue of validation and verification of GCMs, looking mainly at temperature. Roger Pielke Jr., [134][135][136] William Briggs, [137] and Lucia Liljegren [138] have all reached similar conclusions about the GCMs' ability to match observations over the last 20 years.
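
The "coefficient of efficiency" Koutsoyiannis refers to is, in its usual hydrological definition, the Nash-Sutcliffe efficiency: one minus the ratio of the model's squared errors to the squared errors of simply predicting the observed mean. A minimal Python sketch with made-up numbers shows why large negative values mean the model is doing worse than that elementary baseline.

def nash_sutcliffe(observed, modeled):
    # 1.0 = perfect; 0.0 = no better than predicting the observed mean; negative = worse.
    mean_obs = sum(observed) / len(observed)
    sse_model = sum((o - m) ** 2 for o, m in zip(observed, modeled))
    sse_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse_model / sse_mean

obs = [14.1, 14.3, 14.0, 14.4, 14.2]       # made-up 30-year average temperatures
mod = [13.2, 15.1, 13.0, 15.4, 12.9]       # made-up model output with large errors
print(round(nash_sutcliffe(obs, mod), 1))  # about -50: far worse than the mean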

Freeman Dyson reviews a book on the economics of global warming

The book is written by William Nordhaus [139]. He seems to have a good handle on the different proposals but does not discuss the science. Thanks to Syl for pointing out this book to me. [140] RonCram (talk) 00:50, 29 May 2008 (UTC)[reply]

Von Storch before the NAS

McIntyre has a post on the NAS panel and the Hockey Stick controversy. Von Storch made a presentation and McIntyre links to it. Devastating. [141] RonCram (talk) 04:34, 31 May 2008 (UTC)[reply]

Controversy regarding the sea level rise extent and predictions

Dr. Nils-Axel Mörner is the head of the Paleogeophysics and Geodynamics department at Stockholm University in Sweden. He is past president (1999-2003) of the INQUA Commission on Sea Level Changes and Coastal Evolution, and leader of the Maldives Sea Level project. Dr. Mörner has been studying the sea level and its effects on coastal areas for some 35 years.

Dr. Mörner is very unhappy with the claims being made about sea level rise. He says they are simply not true, and that some people have even destroyed evidence to cover up the fact that they are not true. [142] RonCram (talk) 13:59, 16 December 2008 (UTC)[reply]

Why is a rise in tropical temps a fingerprint of AGW?

John A. answered this question well saying:

The tropics receive the greatest insolation, thus any changes in heat trapped by the atmosphere would be most evident there. Furthermore, they have the least natural year-to-year variability, thereby making any secular changes more readily apparent. At latitudes above ~40 degrees, the temperatures are increasingly more dependent upon heat transported from the tropics by global circulation and vary much more, due to the chaotic features of that transport.
What never ceases to amaze me is the preoccupation with yearly rankings and with "trends" in records much shorter than the 88-yr Gleissberg cycle. Natural extremes have a strong tendency to cluster because of cyclical influences. I would suggest that we need records at least twice the length of the G. cycle to obtain meaningful estimates of small secular changes in temperature levels–those within a standard deviation of the year-to-year variability. The f-inverse noise model, though by no means proven to capture all the features of surface temperature variations, would suggest that the whole simplistic concept of linear trends is plainly inapplicable.[143] RonCram (talk) 22:03, 18 December 2008 (UTC)[reply]

AIG News

David Stockwell wrote a nice article on the Hockey Stick controversy. McIntyre cited the Stockwell article in a comment to PNAS about the Mann 08 paper. Also, Louis Hissink wrote a piece including a very good quote from Brignell. [144] RonCram (talk) 12:29, 31 December 2008 (UTC)[reply]

Soden presentation on water vapor feedback

Without water vapor (WV) feedback, expected warming by 2100 is only 1 to 1.5 °C. With WV feedback, warming is expected to be 1.5 to 3 °C. See Slide 7. [145] But Roy Spencer disagrees. RonCram (talk) 12:44, 31 December 2008 (UTC)[reply]

NASA chose not to implement software standards on ModelE or GISTEMP

I noticed a very interesting comment by dishman on Climate Audit. [146] Evidently, NASA chose not to follow its normal software standards for the GCM known as ModelE or for the surface station record known as GISTEMP. I do not know whether an exemption is possible or whether this choice not to follow the rules was unlawful. RonCram (talk) 23:09, 13 January 2009 (UTC)[reply]

New climate theory

A new climate theory has been proposed based on observations, including those of Roy Spencer's new negative feedback. [147] RonCram (talk) 06:14, 17 January 2009 (UTC)[reply]

Hansen is criticized by former boss

My own belief concerning anthropogenic climate change is that the models do not realistically simulate the climate system because there are many very important sub-grid scale processes that the models either replicate poorly or completely omit. Furthermore, some scientists have manipulated the observed data to justify their model results. In doing so, they neither explain what they have modified in the observations, nor explain how they did it. They have resisted making their work transparent so that it can be replicated independently by other scientists. This is clearly contrary to how science should be done. Thus there is no rational justification for using climate model forecasts to determine public policy. [148] RonCram (talk) 06:24, 29 January 2009 (UTC)[reply]

Global warming history is the story of chasing government funding

The Amazing Story Behind the Global Warming Scam. [149] RonCram (talk) 16:46, 29 January 2009 (UTC)[reply]

Ocean Heat Content

Josh Willis wrote an interesting piece. [150] Levitus (2005) [151] believes the Earth's energy budget is out of balance. Levitus estimates that about 84% of the excess heat is stored in the ocean, about 5% heats the continents, about 4% is absorbed by the atmosphere, and the remainder melts sea ice and glaciers. RonCram (talk) 04:00, 14 February 2009 (UTC)[reply]

The abstract of a paper by Carl Wunsch concludes "Used carefully, and with a residual open-mindedness, it is possible to employ some simple statistical tests to avoid the outcomes of inferring unusual behavior where none is present, nor in rejecting a major finding as being insignificant." [152] It seems a little strange this even has to be said but it certainly applies to Ocean Heat Content. The fact the ocean has not warmed since 2002 is a major finding. It is difficult to understand how certain scientists treat it as insignificant. RonCram (talk) 14:53, 14 February 2009 (UTC)[reply]

Japanese scientists part with IPCC

Read this article. [153]RonCram (talk) 06:32, 27 February 2009 (UTC)[reply]

Read this. [154]RonCram (talk) 12:38, 20 April 2009 (UTC)[reply]

On the "Divergence Problem"

Tree rings are not good thermometers. [155] See also the post on WUWT. [156]RonCram (talk) 12:57, 21 May 2009 (UTC)[reply]

M&M on Santer 09

Read the preprint. [157]RonCram (talk) 12:57, 21 May 2009 (UTC)[reply]