
C2C Journal

A Planet that Might not Need Saving: Can CO2 Even Drive Global Temperature?


By Jim Mason
Climate change has ingrained itself so deeply in the public consciousness that it’s likely an article of faith that the foundational science was all conducted, tested and confirmed decades ago. Surely to goodness, exactly how carbon dioxide (CO2) behaves in the atmosphere is “settled science”, right? That more CO2 emitted equals more heat and higher temperature is a cornerstone of the ruling scientific paradigm. And yet, finds Jim Mason, the detailed dynamics of how, when and to what degree CO2 transforms radiation into atmospheric heat are anything but settled. So much remains unknown that recent academic research inquiring whether CO2 at its current atmospheric concentrations can even absorb more heat amounts to breaking new ground in climate science. If it can’t, notes Mason, then further changes in CO2 levels not only won’t drive “global boiling”, they won’t have any impact on climate at all.

Electric Vehicles (EVs) – by which are usually meant battery-operated electric vehicles, or BEVs – have long been touted in many countries as the central element in the strategy for stopping global climate change. Canada is no exception. The Liberal government’s Minister of Environment and Climate Change, Steven Guilbeault, has mandated that by 2030, 60 percent of all vehicles sold in Canada must be BEVs, rising to 100 percent by 2035. In anticipation of the accompanying surge in BEV sales, the federal and Ontario governments have offered huge subsidies to battery manufacturing companies. But now, EV sales are stagnating and automobile manufacturers that were rushing headlong into EV production have dramatically scaled back their plans. Ford Motor Company has even decided that, instead of converting its Oakville, Ontario plant to EV production, it will retool it to produce the Super Duty models of its best-selling – and internal combustion engine-powered – pickup truck line.

Heating up the rhetoric: “The era of global warming has ended; the era of global boiling has arrived,” UN Secretary-General Antonio Guterres (top left) has declared; prominent voices such as former U.S. Vice President Al Gore (top right) insist it’s “settled science” that “humans are to blame” for global warming; this view has been accepted by millions worldwide (bottom). (Sources of photos: (top left) UNclimatechange, licensed under CC BY-NC-SA 2.0; (top right) World Economic Forum, licensed under CC BY-NC-SA 2.0; (bottom) Takver from Australia, licensed under CC BY-SA 2.0)

A big part of the justification for forcing Canadians into EVs has been that we must “follow the science.” Namely, the “settled” science which holds that the planet’s atmosphere is heating dangerously and that humans are the cause of this via our prodigious emissions of heat-trapping gases – mainly carbon dioxide (CO2) – which are magnifying the atmosphere’s “greenhouse” effect. Over the past several decades the accompanying political rhetoric has also heated up, from initial concerns over global warming and a threatened planet – terms that at least accommodated political and scientific debate – to categorical declarations of a “climate emergency”. As UN Secretary-General Antonio Guterres asserted last year, “The era of global warming has ended; the era of global boiling has arrived.”

The foundational term “follow the science” is loaded, however. It is code for “follow the science disseminated by the UN’s Intergovernmental Panel on Climate Change (IPCC).” Article 1 of the UN’s Framework Convention on Climate Change actually defines climate change as “a change of climate which is attributed directly or indirectly to human activity”. Elsewhere the document clearly identifies CO2 emitted through the burning of fossil fuels as the causative human activity. So the UN and IPCC long ago limited the scope of what is presented to the public as a scientific investigation and decided not only on the cause of the problem but also the nature of the solution, namely radically driving down “anthropogenic emissions of carbon dioxide and other greenhouse gases.”

The worldwide climate change movement has proved remarkably successful in creating what is known as a “ruling paradigm”. This phenomenon is common in many fields and not necessarily harmful. But in this instance, what is billed as a scientific endeavour has strictly limited the role of open scientific inquiry. The “science” that the movement wants humanity to follow is the result of inferential, inductive interpretation of empirical observations, declared to be “settled science” on the basis of a claimed consensus rather than as a result of controlled, repeatable experiments designed not to reinforce the paradigm but to test it for falsifiability, in accordance with the scientific method. This paradigm has allowed Guterres, for example, to claim that “for scientists, it is unequivocal – humans are to blame.” But it is missing (or attempts to exclude) a key element: rigorous experimentation that subjects the theory on which the paradigm is built to disinterested scientific scrutiny.

Following whose science? The UN’s Intergovernmental Panel on Climate Change (IPCC) defines climate change as being solely “attributed directly or indirectly to human activity,” particularly the burning of fossil fuels, thus limiting not only the scope of public discourse but pertinent scientific inquiry as well. (Sources: (left photo) Robin Utrecht/abacapress.com; (middle photo) Qiu Chen/Xinhua/abacapress.com; (graph) IPCC, 2023: Summary for Policymakers, Climate Change 2023: Synthesis Report)

Thankfully, some scientists are still conducting this kind of research and, for those who value and follow properly-done science, the results can be eye-opening. Two recent scientific papers appear to be of particular interest in this regard. Each is aimed at assessing the role of CO2 in influencing atmospheric temperature. Nobody doubts that carbon dioxide is a greenhouse gas; the real questions are how much additional radiant energy today’s elevated CO2 concentrations (elevated partly through the burning of fossil fuels) are absorbing compared to the past, and what impact this has on the climate.

One might have thought such work would have been done 30 or more years ago but, apparently, it has not. That additional CO2 emitted into the atmosphere absorbs additional radiant energy, and that such additions do so in a linear fashion, are two of climate change theory’s critical premises. So it would seem crucial to establish scientifically whether gases emitted due to human activity – like CO2 – are capable of raising the Earth’s atmospheric temperature and, accordingly, justify their assigned role of causative agent in humanity’s planet-threatening villainy. The papers discussed below thus deal with questions of profound importance to climate change theory and the policy response of countries around the world.

If CO2 is not actually an effective current driver of atmospheric temperature, the implications are staggering.

How – and How Much – CO2 Traps Heat in the Atmosphere

The first paper developed a mathematically rigorous theory from first principles regarding the absorption of long-wavelength radiation (LWR) by a column of air as the concentration of CO2 (or other greenhouse gases such as water vapour, methane, ozone or nitrous oxide) increases in the atmosphere.

The Earth receives solar energy mainly in shorter wavelengths, including visible light. According to NASA, just under half of this incident radiation reaches the ground, where it is absorbed and transformed into heat. Of that half, just over one-third is radiated back into the atmosphere; about one-third of that amount is absorbed by heat-trapping gases, including CO2. (The air’s main constituents of oxygen and nitrogen are essentially transparent to both incoming visible radiation and outgoing LWR). Importantly, CO2 can only absorb meaningful amounts of LWR in two specific bands, but in these bands, it can absorb all of it.

The greenhouse effect, oversimplified and distorted: This seemingly easy-to-understand info-graphic downplays the fact that a large proportion of incoming solar radiation returns to space, omits the key fact that there would be no life on Earth without the natural greenhouse effect, and leaves out the most significant greenhouse gas of all: water vapour. (Source of image: EDUCBA)

The paper employs formulae whose explanation exceeds the scope of this article, but in simplified terms the theory predicts that at some concentration – designated as “C” – one-half of the LWR in the absorbable band is absorbed. Importantly, the theory postulates that the absorption of LWR does not increase in a linear fashion along with the increase in atmospheric CO2. Instead, as the gas concentration increases, the incremental amount absorbed decreases. At twice the C value – 2C – only three-quarters of the incident radiation would be absorbed. And at 3C, seven-eighths.

By 10C, 99.9 percent of the absorbable LWR is being absorbed. In effect, the atmosphere has become “saturated” with the gas from a radiation-absorption perspective, and further increases in CO2 have negligible effect on absorption. This relationship is illustrated in Figure 1. As one can see, it is distinctly non-linear – an exponential-saturation curve that asymptotically approaches 100 percent absorption.
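To make the progression concrete, here is a minimal sketch – my own illustration, not code from the paper – that tabulates the absorbed fraction at whole multiples of C:

```python
# The halving progression described above: if concentration C absorbs half
# the absorbable LWR, each further increment of C absorbs half of what
# still gets through, so the absorbed fraction at n*C is 1 - (1/2)**n.
for n in range(1, 11):
    absorbed = 1 - 0.5 ** n
    print(f"{n:2d}C: {absorbed:.3%} of the absorbable LWR absorbed")
# 1C: 50.000%, 2C: 75.000%, 3C: 87.500% ... 10C: 99.902%
```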

Figure 1: Theoretical graphical depiction of absorption of incident radiation as a function of the concentration of an absorbing gas, in this case forming the core of a theory concerning how long-wave radiation emitted from the Earth’s surface is absorbed by atmospheric CO2. (Source of graph: Jim Mason)

This graph could appear quite scary at first glance. After all, as more CO2 is added to the atmosphere, more LWR is being absorbed, suggesting more heat is being retained instead of escaping to outer space. So doesn’t the Figure 1 curve suggest that, with CO2 concentrations rising and potentially doubling during our era compared to pre-Industrial times, global warming is indeed occurring, CO2 is indeed the cause, and the IPCC’s warnings are justified? The answers depend on what the value of C actually is for CO2 and what the concentration of CO2 is in the atmosphere today. In other words, on where the Earth now sits along that curve, and where it sat when the pre-Industrial era came to an end. That, in turn, will require actual physical experiments – and these are covered later.

Figure 2: View from end of air column showing random positions of radiation-absorbing gas molecules, with red circles representing their associated radiation-absorbing cross-section. (Source of illustration: Jim Mason)

The non-linear LWR/absorption relationship can be understood conceptually as follows. Each physical CO2 molecule has what amounts to a surrounding area of radiation-absorbing capability for specific bands of LWR. This area is “opaque” to those bands: LWR rising from the Earth’s surface is absorbed if it travels into this area, and passes by unabsorbed otherwise. The areas can be thought of as little opaque spheres around each molecule, which when viewed externally look like circles. The area of these circles is referred to as the radiation-absorption cross-section.

Viewed from the end of the column of air, the circular cross-sections formed by all the CO2 molecules in the air column will effectively add up to some overall fraction of the air column’s cross-sectional area becoming opaque to the LWR. Radiation that strikes any of that area will be absorbed; radiation travelling through the rest of the column’s cross-sectional area will pass into space.

At some concentration of molecules – dubbed C in this essay, as mentioned – half of the column’s cross-section will be opaque and absorbing the incident LWR. This is illustrated in Figure 2. Because the gas molecules are positioned randomly within the column of air, their cross-sections will partly overlap when viewed from the end, and overlapping areas cannot absorb the same radiation twice. C is the concentration at which the effective opaque area, taking all the overlapping into account, is one-half the column’s cross-sectional area.

If the gas concentration is then increased by C, i.e. is doubled, the new molecules will also have an associated opaque area equal to half of the column’s cross-sectional area. Half of this, however, will coincide with the half that is already opaque, so will have no impact. The other half, or one-quarter of the column’s cross-section, will become newly opaque and start absorbing LWR. If the concentration is again increased by C, the new molecules will also have a total opaque area equal to one-half the column cross-section, but three-quarters of this will coincide with already-opaque area so only one-quarter of that one-half, or one-eighth in total, will become new radiation-absorbing opacity.
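The overlap argument can also be checked numerically. The toy Monte Carlo below is my own construction – neither paper contains such code – and simply drops batches of randomly placed opaque disks onto a unit square, measuring the covered area as it goes. Each batch alone would cover about half the square, yet successive batches add only about 25, 12.5 and 6.25 percentage points of new coverage, exactly as the overlap reasoning predicts:

```python
# Toy Monte Carlo of randomly placed radiation-absorbing cross-sections
# (illustrative only). Disks land at random on a unit square (with
# wraparound to avoid edge effects); coverage is sampled on a grid.
import numpy as np

rng = np.random.default_rng(0)
r = 0.02                                 # disk (cross-section) radius
disk_area = np.pi * r ** 2
# batch size chosen so that one batch alone covers ~50% of the square:
n_half = int(np.log(0.5) / np.log(1.0 - disk_area))

xs, ys = np.meshgrid(np.linspace(0, 1, 300), np.linspace(0, 1, 300))
covered = np.zeros(xs.shape, dtype=bool)

for batch in range(1, 6):
    for x0, y0 in rng.random((n_half, 2)):
        dx = np.minimum(np.abs(xs - x0), 1 - np.abs(xs - x0))  # wraparound
        dy = np.minimum(np.abs(ys - y0), 1 - np.abs(ys - y0))
        covered |= dx * dx + dy * dy <= r * r
    print(f"after {batch} batch(es): {covered.mean():.3f} of the area opaque")
# Expected output: roughly 0.50, 0.75, 0.875, 0.938, 0.969
```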

Figure 3: Illustrative depiction of the radiation-absorption cross-section, showing how the transparent area – where additional molecules would be exposed to the radiant heat source and therefore absorb radiation – is progressively reduced as more molecules are added. After several more iterations this leads to radiation-absorption “saturation”, after which no further radiation is absorbed no matter how many more absorbing molecules are added, since all radiation in that wavelength band is already being absorbed. (Source of illustrations: Jim Mason)

This progression is illustrated in Figure 3, but is perhaps more easily visualized in Figure 4. Here, the half of the air column’s cross-sectional area that was rendered opaque by the CO2 molecules is shown as being all on one side of the column. The opacity caused by each successive addition of C number of CO2 molecules is shown in a different colour and is positioned to highlight the impact on the remaining transparent cross-sectional area. As can be seen, each successive increase in concentration of C adds only half as much new absorption as the increase before it. After ten such increases, the transparent fraction of the column would be reduced to 0.1 percent of its area, so that 99.9 percent of the incident radiation is being absorbed.

Although the foregoing description is conceptually correct, a full understanding of natural processes and of the theory requires taking several other considerations into account, most of which are outside the scope of this discussion. One aspect that is important to understand: as mentioned above, CO2 and other greenhouse gases only absorb outgoing radiation efficiently in particular regions (bands) of the electromagnetic spectrum; they absorb little or none in other bands and are therefore “transparent” to any radiation in those bands. The above discussion applies to the regions of the spectrum where the gas can absorb radiant energy at 100 percent.

Figure 4: Alternative depiction of the reduction in incremental radiation-absorbing area as the absorbing gas concentration is increased in multiples of the concentration that absorbs 50 percent of the incident radiation. As in Figure 3, successive sets of molecules are indicated by red, orange and yellow, with another added set indicated in green, while blue represents the remaining transparent area. (Source of illustration: Jim Mason)

Another aspect – which becomes important in the following section – is that the Earth’s surface temperature varies by location, weather, time of year and time of day. This will affect how much radiant energy goes out in various wavelengths in various places, and the absorbing gas’s absorption capacity. While the theory holds that this does not alter the basic non-linearity of absorption nor the “saturation” phenomenon, it could alter the point at which “C”, or 50 percent absorption, is reached – something that can be tested through experimentation.

The net of all this is that if the theoretical formulation is correct, one would expect to see a curve similar to Figure 1 for any individual greenhouse gas – or combination of gases – and a radiant energy source of any temperature, with the curve’s specific shape depending on the gas and/or mixture of gases and the temperature of the radiant energy source. From such a curve, it would be possible to determine the concentration at which the gas is absorbing 50 percent of the maximum energy that it would absorb if its concentration were increased to a very large value.

The paper that develops this theoretical formulation is entitled Dependence of Earth’s Thermal Radiation on Five Most Abundant Greenhouse Gases and was co-authored in June 2020 by William A. van Wijngaarden and William Happer. It is highly technical and would be very difficult for anyone without a strong background in mathematics and science, ideally physics, to understand. Accordingly, the accompanying figures in this article that illustrate the paper’s key ideas in a format accessible to the layperson were produced by me, using information derived from the paper’s figures and text.

Van Wijngaarden is a professor in the Department of Physics and Astronomy at York University in Toronto with a more-than-40-year academic track record and nearly 300 academic papers to his credit, while Happer is Professor Emeritus of Physics at Princeton University in New Jersey, with a 50-year-long academic career and nearly 200 papers to his credit. Both authors also have numerous related academic achievements, awards and organizational memberships, and have mentored hundreds of graduate students. Happer happens to be an open skeptic of the IPCC/UN climate change “consensus”, while Van Wijngaarden has testified in a court case that the IPCC’s climate models “systematically overstate global warming”, an assertion that is incontrovertibly true.

Although their paper was not peer-reviewed and, to date, has not been published in a major academic journal, and although both scientists have endured smears in news and social media as climate skeptics or “deniers”, there has been no known attempt to refute their theory since its release, such as by identifying errors in the logic or mathematics of their theoretical formulation. Accordingly, their paper is in my opinion an example of good science: a coherent theory aimed at explaining a known phenomenon using rigorous scientific and mathematical principles and formulae, plus supporting evidence. It is also, critically, one that can be subjected to physical experimentation, i.e., is disprovable, as we shall soon see.

Running hot: William A. van Wijngaarden (top left) and William Happer (top right), two highly credentialed physicists with outstanding academic track records, are among scientists who are openly critical of the IPCC’s accepted climate models which, as 40 years of temperature observations have clearly shown, “systematically overstate global warming”. (Source of bottom graph: CEI.org)

This opinion is supported by the fact that the same phenomenon of non-linearity and absorption saturation, along with an associated equation referred to as the Beer-Lambert Law, is discussed by Thayer Watkins, a mathematician and physicist, and professor emeritus of economics at San José State University. “In order to properly understand the greenhouse effect one must take into account the nonlinearity of the effect of increased concentration of greenhouse gases,” Watkins notes. “The source of the nonlinearity may be thought of in terms of a saturation of the absorption capacity of the atmosphere in particular frequency bands.”
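For reference, the Beer-Lambert relationship Watkins invokes can be written out explicitly. The restatement below is mine, chosen to show that exponential attenuation reproduces the halving behaviour exactly when the absorber amount is measured in multiples of the half-absorption value C:

```latex
% Beer-Lambert attenuation for a single absorbing band, where u is the
% absorber amount in the column (kg/m^2) and k is an effective absorption
% coefficient (m^2/kg) for that band:
T(u) = e^{-ku}, \qquad A(u) = 1 - e^{-ku}
% Defining C by A(C) = 1/2 gives k = \ln 2 / C, and therefore
A(nC) = 1 - e^{-n \ln 2} = 1 - 2^{-n}
% i.e. 1/2, 3/4, 7/8, ... of the band is absorbed at C, 2C, 3C, ...
```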

Subjecting the LWR Absorption Theory to Experimentation – Or, Science the Way it Should be Done

The second paper was published in March of this year and reports on experiments conducted to test van Wijngaarden and Happer’s theory, in accordance with the standard scientific method, using several different greenhouse gases. If the experiments were properly designed to realistically duplicate natural processes and if they then generated results inconsistent with the theory, then the van Wijngaarden/Happer theory could be considered disproved. If the experiments produced results consistent with the theory, the theory would not be proved but would increase in plausibility and justify further experimentation.

The experimental setup is depicted in Figure 5. It was designed to allow the concentration of CO2 within a column of gas (in kilograms per square metre of column area, or kg/m2) to be varied in a controlled way and to measure the fraction of the incident radiation that is absorbed at any concentration. The “column” of gas was contained within a cylinder made from a 1-metre length of 150 mm diameter PVC pipe, with polyethylene windows at either end to allow ingress and egress of the radiation. CO2 concentration was changed by injecting measured amounts of the gas via a central valve. Water valves on the cylinder bottom allowed an identical volume of gas to escape, thereby maintaining the pressure in the cell. (The background gases into which the CO2 was mixed are unimportant since these remained constant, with only the CO2 concentration varied.)

Figure 5: Diagram of the laboratory setup for measuring the absorption of thermal radiation in CO2. (Source of illustration: Climatic consequences of the process of saturation of radiation absorption in gases, Figure 7)

The radiation source was a glass vessel with a flat side containing oil maintained at a constant temperature. Adjacent to the flat side was a copper plate with a graphite surface facing the gas cell. This ensured that the radiant source, as seen by the cell, was uniform in temperature over the cross-section of the cell and had the radiation profile of a black body at the chosen temperature. The selected temperatures of 78.6°C and 109.5°C were, states the paper, “chosen randomly but in a manner that allowed reliable measurement of radiation intensity and demonstrated the influence of temperature on the saturation mass value.”

Results for CO2 are illustrated in Figure 6, which is taken directly from the paper. The two selected temperatures are separately graphed. Figure 6 clearly shows experimental curves that are qualitatively the same as the theoretical curve in Figure 1 derived from van Wijngaarden/Happer’s paper and the equation noted in Watkins’ website discussion. From the graph it is possible to determine that the concentration of CO2 that results in absorption of 50 percent of the absorbable radiation – the value of C introduced earlier – is about 0.04 kg/m2 for a LWR temperature of 78.6 °C (the one that is closer to the actual average temperature of the Earth’s surface, which NASA lists as 15 °C).

Figure 6: Absorption of incident radiation versus concentration of CO2, with concentration expressed as a weight per cross-sectional area of atmospheric column (kg/m2), using two experimental LWR temperatures. Absorption is effectively measured as the fraction of the total incident radiation that is absorbed in the test column, which is determined by comparing it to an identical test column that maintains the zero-point concentration throughout. The reason that A saturates at less than 1 is that there are many wavelengths in the incident radiation that CO2 does not absorb; these pass through the column regardless of the CO2 concentration, with only the other wavelengths being absorbed. (Source of graph and mathematical formula: Climatic consequences of the process of saturation of radiation absorption in gases, Figure 8)

As the paper notes, the chosen temperatures, while higher than the Earth’s mean surface temperature, facilitate reliable measurements of the radiation intensities and clearly show the effect of temperature on the saturation mass value or, equivalently, the value of C. Specifically, the graphs clearly show that the value of C decreases as the temperature of the radiant source decreases (although with only two points, the nature of the relationship cannot be reliably determined). The implications are discussed in the following section.
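As an illustration of how a C value is read off such curves, the sketch below fits the saturating Beer-Lambert form to absorption-versus-concentration data. The data points are invented to resemble the 78.6°C curve only qualitatively; they are not the experimenters’ actual measurements:

```python
# Fitting A(u) = A_sat * (1 - exp(-u * ln2 / C)) to measured absorption
# data to extract the half-absorption concentration C. The data below are
# INVENTED for illustration; they are not taken from the paper's Figure 6.
import numpy as np
from scipy.optimize import curve_fit

def model(u, a_sat, c_half):
    return a_sat * (1.0 - np.exp(-u * np.log(2) / c_half))

u = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64])         # kg/m^2
a = np.array([0.048, 0.088, 0.150, 0.225, 0.280, 0.298, 0.300])  # absorbed

(a_sat, c_half), _ = curve_fit(model, u, a, p0=(0.3, 0.04))
print(f"A_sat ~ {a_sat:.3f}, C ~ {c_half:.3f} kg/m^2")  # ~0.300 and ~0.040
```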

This experimental paper is entitled Climatic consequences of the process of saturation of radiation absorption in gases and was co-authored by Jan Kubicki, Krzysztof Kopczyński and Jarosław Młyńczak. It was published in the journal Applications in Engineering Science and cites copious sources, though it does not appear to have been peer-reviewed. Kubicki is an assistant professor in the Institute of Optoelectronics at the Military University of Technology in Warsaw, Poland. Kopczyński appears to be a colleague at the same institution, specializing in the atmospheric distribution of aerosols, while Młyńczak is an adjunct professor there. All three have authored or co-authored a number of scientific papers.

Is CO2 Even Capable of Driving Global Temperatures Higher?

According to a reputable atmospheric tracking website, on September 2, 2024 the Earth’s atmospheric CO2 concentration was 422.78 parts per million (ppm). Each ppm worldwide equates to a total atmospheric weight of 7.82 gigatonnes (Gt). The cited concentration therefore amounts to some 3,300 Gt of CO2 in the Earth’s atmosphere. Since the Earth’s surface area is 5.1 x 10^14 m2, assuming a uniform CO2 distribution, this concentration can be translated into the units used in the above-cited experiment as 6.48 kg/m2 across the Earth’s surface.
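The conversion is simple enough to verify; here is the arithmetic as a quick sketch (mine, using the figures cited above):

```python
# Converting the quoted atmospheric CO2 concentration into the column
# density (kg/m^2) used in the experiment.
ppm = 422.78            # CO2 concentration, parts per million
gt_per_ppm = 7.82       # gigatonnes of atmospheric CO2 per ppm (cited factor)
earth_area_m2 = 5.1e14  # Earth's surface area in square metres

total_kg = ppm * gt_per_ppm * 1e12               # 1 Gt = 1e12 kg
print(f"total CO2: {ppm * gt_per_ppm:,.0f} Gt")  # ~3,306 Gt
print(f"column density: {total_kg / earth_area_m2:.2f} kg/m^2")  # ~6.48
```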

This figure might appear at first glance to be a misprint, as 6.48 kg/m2 is approximately 160 times the CO2 C value of 0.04 kg/m2 – the concentration that absorbs 50 percent of the incident LWR. Six-point-six times the C value – the level that absorbs 99 percent of the incident LWR – is still only 0.264 kg/m2. Beyond this, further increases in gas concentration have no impact on absorption or, hence, on temperature. The Earth’s current concentration of CO2 is, accordingly, over 24 times as high as what is needed to achieve the 99 percent absorption value established by experimentation.
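Those multiples follow directly from the halving rule; here is a brief check (my arithmetic) of the ratios quoted above:

```python
# Checking the saturation multiples quoted in the text.
import math

C = 0.04        # kg/m^2: half-absorption value read from the experiment
current = 6.48  # kg/m^2: present-day CO2 column density computed above

n99 = math.log2(100)  # halvings needed for 99% absorption (~6.64; the text
                      # rounds to 6.6, giving 6.6 * 0.04 = 0.264 kg/m^2)
u99 = n99 * C
print(f"current / C   = {current / C:.0f}x")    # ~162 ('approximately 160')
print(f"current / u99 = {current / u99:.1f}x")  # ~24.4 ('over 24 times')
```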

Long past the point of change? The Earth’s currently estimated CO2 concentration of 422.78 parts per million (ppm) is 27 times the estimated CO2 saturation level; even in the pre-Industrial era, CO2 concentrations were more than 12 times that level, suggesting the current rise in CO2 concentration is incapable of driving global temperature. (Source of graph: climate.gov)

The implications of this are quite staggering. According to climate.gov, the CO2 concentration in the pre-Industrial era was 280 ppm and prior to that it oscillated between about 180 ppm and 280 ppm. This means that even the pre-Industrial CO2 concentrations were between 64 and 100 times the C value, as well as being more than 10 times the concentrations needed to reach 99 percent absorption. The CO2 concentration, then, was saturated multiple times over with respect to LWR absorption. A glance back at Figure 1 once again makes it clear that in neither of these CO2 concentration ranges (covering present times and the pre-Industrial era) were changes to the CO2 concentration capable of having any substantive impact on the amount of LWR being absorbed or, consequently, on atmospheric temperatures – let alone the Earth’s whole climate.

Further, they probably never did. According to the published paper Geocarb III: A Revised Model of Atmospheric CO2 Over Phanerozoic Time, the CO2 concentration has never been less than 180 ppm during the so-called Phanerozoic Eon, which is the entire time during which all the rock layers in the geological column, from the Cambrian upwards, were deposited. So there has never been any point during this significant span of Earth’s history when the concentration of CO2 in the atmosphere was not “saturated” from a LWR absorption perspective. Consequently, throughout that entire period, if the new theory and recent experimentation are correct, changes in CO2 concentration have been incapable of having any discernible impact on the amount of LWR absorbed by the atmosphere – or, accordingly, on the global climate.

It’s true that increasing CO2 concentration could be capable of driving global atmospheric temperature higher – but only if it began at vastly lower concentrations than exist at present or at any known previous time. If such a time ever existed, it is long past. At the current order of magnitude in its concentration, CO2 simply appears not to be a factor. If further experimentation also generates results consistent with the van Wijngaarden/Happer theory, it would appear that CO2 is incapable of having any impact on atmospheric temperature at all. It cannot, accordingly, be the primary source of “global warming” or “climate change”, let alone of a “climate emergency” or “global boiling”.

While experimental results consistent with a theory do not prove the theory to be true – and replication of the three Polish researchers’ initial results would be very desirable – the experimental results to date are certainly inconsistent with the current ruling paradigm that CO2 emissions from the burning of fossil fuels are the cause of current climate change and, indeed, that the effect of each increase in concentration is accelerating. According to the rules of decision-making in science, unless the experiment can be shown to be badly designed or fraudulent, the inconsistency between experiment and theory proves, scientifically, that the current paradigm is false.

The incidence and recession of the Ice Age (top), the Medieval Warm Period (bottom left) and the more recent Little Ice Age (bottom right) are just a few examples of global temperature fluctuations that happened independently of the current era’s burning of fossil fuels or increasing CO2 levels. Shown at top, northern mammoths exhibit at the American Museum of Natural History’s Hall of North American Mammals; bottom left, peasants working on the fields next to the Medieval Louvre Castle, from The Very Rich Hours of the Duke of Berry, circa 1410; bottom right, Enjoying the Ice, by Hendrick Avercamp, circa 1615-1620. (Source of top photo: wallyg, licensed under CC BY-NC-ND 2.0)

Moreover, the new theory and experimental result are consistent with numerous empirical observations that are also inconsistent with the ruling IPCC/climate movement paradigm. Examples abound: the occurrence and recession of the Ice Age, the appearance and disappearance of the Roman and Medieval Warm Periods and the Little Ice Age – all without any CO2 from the burning of fossil fuels – the steady decline in atmospheric temperatures from 1940 to 1975 while CO2 levels steadily increased, and the relative flattening of atmospheric temperatures since about 2000 despite CO2 levels continuing to increase. But if the level of CO2 in the atmosphere has long been above “saturation”, then its variations have no real impact on the climate – as these observations indicate – and something else must have caused these climate variations.

To this can be added the failed predictions of the temperature models and of the dire consequences of not preventing further CO2 increases – such as the polar bears going extinct, the Arctic being free of ice, or Manhattan being covered by water, to list just a few. But again, if the atmosphere has long been CO2-saturated with respect to LWR absorption, then the additional CO2 will have no effect on the climate, which is what the failure of the predictions also indicates.

These results, at the very least, ought to give Climaggedonites pause, although probably they won’t. For the rest of us, they strongly suggest that EVs are a solution in search of a problem. It may be that the technology has a place. The alleged simplicity ought to have spinoff advantages, although the alleged spontaneous combustibility might offset these, and the alleged financial benefit might be simply the consequence of government subsidies. Left to its own devices, without government ideological distortions, the free marketplace would sort all this out.

Climate scare à la carte. (Source of screenshots: CEI.org)

More importantly, these results ought to cause politicians to re-examine their climate-related policies. As van Wijngaarden and Happer put it in their paper, “At current concentrations, the forcings from all greenhouse gases are saturated. The saturations of the abundant greenhouse gases H2O and CO2 are so extreme that the per-molecule forcing is attenuated by four orders of magnitude…” The term “forcings” refers to a complicated concept but, at bottom, signifies the ability (or lack thereof) to influence the Earth’s energy balance. The words “forcings…are saturated” could be restated crudely in layperson’s terms as, “CO2 is impotent.”

Kubicki, Kopczyński and Młyńczak are even more blunt. “The presented material shows that despite the fact that the majority of publications attempt to depict a catastrophic future for our planet due to the anthropogenic increase in CO2 and its impact on Earth’s climate, the shown facts raise serious doubts about this influence,” the three Polish co-authors write in their experimental paper. “In science, especially in the natural sciences, we should strive to present a true picture of reality, primarily through empirical knowledge.”

If, indeed, the CO2 concentration in the Earth’s atmosphere is well beyond the level where increases cause additional LWR to be absorbed and, as a consequence, change the climate, then all government policies intended to reduce or eliminate CO2 emissions in order to stop climate change are just as effective as King Canute’s efforts to stop the tides. The only difference is that Canute was aware of the futility.

Jim Mason holds a BSc in engineering physics and a PhD in experimental nuclear physics. His doctoral research and much of his career involved extensive analysis of “noisy” data to extract useful information, which was then further analyzed to identify meaningful relationships indicative of underlying causes. He is currently retired and living near Lakefield, Ontario.


Net Gain: A Common-Sense Climate Change Policy for Canada


From the C2C Journal

By Robert Lyman
Most Canadians have come to agree that the federal carbon tax needs to go. But while the rallying cry “Axe the Tax!” has been a deadly partisan tool for Pierre Poilievre, it does not constitute a credible election campaign platform, let alone a coherent environmental policy for a new government. The Conservative Party needs to develop both, writes Robert Lyman. The election this past week of Donald Trump as U.S. President creates an urgency to remake Canada’s climate policy on more realistic, sensible grounds. Drawing upon the pragmatic, economics-driven approach of the Copenhagen Consensus, Lyman proposes a middle path that discards the uncompromising, self-destructive ideology of the Justin Trudeau government while recognizing that most Canadians won’t accept doing nothing.

The Justin Trudeau government has made reducing greenhouse gas emissions the pre-eminent goal of public policy. In 2021 it passed the Canadian Net-Zero Emissions Accountability Act, binding present and future governments to a process intended to achieve “net zero” emissions by 2050 and to set incremental five-year emission reduction targets and plans towards that end. Net zero essentially means eliminating almost all the greenhouse gas (GHG) emissions resulting from the consumption of hydrocarbons – crude oil, natural gas and coal – in the Canadian economy, and doing so within 29 years of the new law’s passage.

This presents an immense challenge and is effectively impossible in the intended timeframe. Canadians currently rely on fossil fuels to meet about 73 percent of their energy needs. These energy sources provide services essential to Canadians’ incomes and wellbeing: secure, reliable and affordable heat, lighting and motive power to move people and goods, as well as the food, medicine and other critical services to sustain them. Without these energy sources, Canadians would all be far poorer, colder, less mobile and less able to compete in the global economy.

Impossible dream: With fossil fuels currently meeting 73 percent of Canada’s overall energy requirements and fulfilling critical needs from heating to medical services, getting to “net zero” emissions anytime soon seems delusional. (Sources of photos: (top two) Pexels; (bottom two) Unsplash)

At least four trends are coming together to make the present policy course untenable:

  1. The Canadian public is becoming far more aware of the financial costs of the emission reduction measures, including especially the impact of “carbon” taxes (technically, taxes on fossil fuel-related emissions of carbon dioxide (CO2)) and higher electricity rates from switching away from lowest-cost generating options. Federal climate-related spending, by the government’s own admission (see page 125 of the pdf version of the linked document), is now in the range of $20 billion per year, while the economic cost of working towards net zero has been credibly estimated at $60 billion per year.
  2. The public – notably young people and seniors – are becoming more aware of the effects of climate-related regulations and taxes on the cost of living, especially the cost of housing, and on employment opportunities.
  3. There is a wide and growing disparity between the promises of politicians to reduce emissions and what is actually happening; no national emissions “target” has ever been met or is likely to be met.
  4. Rapidly growing emissions in many developing countries (especially China and India), which now collectively generate 68 percent of the world’s total, demonstrate that net zero will not be achieved globally. Furthermore, reductions achieved regardless of cost in Canada (which produces approximately 1.5 percent of global emissions) will yield negligible global benefits in terms of temperature or weather.

The Temptation of a Different Kind of “Net Zero” Policy

Based on these trends, it might be argued that Canada should perform an immediate policy U-turn and cancel all federal measures founded upon any claim of impending climate catastrophe. This would give new meaning to the term “Net Zero Policy”: a government whose climate change policy is to have no policy. Enthusiasm for such an approach must, however, be tempered by the recognition that it runs counter to the position held by all the main political actors in Canada, including notably the mainstream media. Policy, like politics, best evolves in the realm of compromise and consensus.

“Axe the Tax” has its limits: Conservative Party leader Pierre Poilievre (top) has pledged to get rid of the hated consumer carbon tax and eliminate comprehensive electric vehicle mandates, but he’s expected to maintain the pricey “producer” carbon tax on industrial emitters. (Sources of photos: (top) The Canadian Press/Paul Daly; (middle) WSDOT, licensed under CC BY-NC-ND 2.0; (bottom) Shutterstock)

Thus, one should consider where might lie a “middle ground” that could garner the support not only of those strongly opposed to all elements of current policy – which can loosely be described as Conservative leader Pierre Poilievre’s core base – but also of moderates, i.e., people who do not doubt the general notion of climate change but who shy away from radical or ruinous policies to deal with it. This disparate category likely includes much of the business community, what used to be called “Red Tories”, some centrist Liberals disaffected with Trudeau and some working-class NDP voters suspicious of that party’s current direction.

Politics at its most basic will require that the Conservatives have something to put in their campaign platform entitled “climate change”, “emissions” or, more broadly, “the environment”. So far, Poilievre has been cobbling together policy ideas seemingly ad hoc. As practically every Canadian knows, he pledges to get rid of the consumer carbon tax – the one everyone pays at the gas pump or on their natural gas heating bill.

Less understood, however, is that Poilievre is widely believed to intend to maintain the “producer” carbon tax on industrial emitters – an equally steep, equally escalating levy that is burdening industry with billions of dollars in additional taxation. Additionally, Poilievre has promised to get rid of some major Liberal-imposed regulations – like the mandate to transition to entirely electric vehicle production by 2035 – but would rely even more heavily on other technocratic regulations at the industrial level.

Some of these policies make sense on their face; some might not make sense at all. What is clear, though, is that the Conservatives do not have a complete climate change and/or environmental policy – at least not one they have shared with the public. Eliminating the consumer carbon tax as an unfairly imposed cost and needless drag on the economy as well as a symbol of climate policy over-reach would be an important and politically popular way to demonstrate a more common-sense approach.

It is not enough, however, and it would leave a new government vulnerable to the accusation that it lacked a coherent and well-considered approach. Attempting to govern without a clearly articulated overall policy on climate would politically damage even a solid majority government; in a minority situation, it could be enough to destabilize the government altogether and prompt an early election.

A Better Way

There is a better way – a middle way between the current ideological approach and a no-policy-policy. It is inspired by the work of the Copenhagen Consensus Center. This ongoing project seeks to establish priorities for advancing global welfare in a range of areas, from battling diseases like malaria to advancing national economic development to addressing climate change, through methodologies based on welfare economics, which centres on cost-benefit analysis.* The Copenhagen Consensus was conceived and launched in the early 2000s by Bjorn Lomborg, the famous Danish environmentalist. In each policy area examined, subject matter experts present potential policy solutions, which are evaluated and ranked by a panel of economists, thus emphasizing rational prioritization through economic analysis.

In 2009 the Copenhagen Consensus assembled an expert panel to consider the best responses to climate change and rank them as priorities. The panel was asked to answer the question: “If the global community wants to spend up to, say $250 billion per year over the next 10 years to diminish the adverse effects of climate changes, and to do most good for the world, which solutions would yield the greatest net benefits?”

In the resulting report, the top priorities generally focused on investments in scientific research and technology development and commercialization, while measures to reduce CO2 emissions using currently available technologies were ranked lower, because these were found to incur high costs in relation to the expected environmental benefits. Of 15 possible policy measures to respond to climate change, the Copenhagen Consensus panel ranked carbon taxes the very worst – something of obvious relevance to Canada. Also of interest in the Canadian context was the experts’ strong endorsement of research into carbon storage (something that Alberta and Saskatchewan are very enthusiastic about), planning for adaptation and the expansion and protection of forests.

A better way: Founded by Danish environmentalist Bjorn Lomborg, the Copenhagen Consensus Center uses rational economic analysis to advance global welfare in areas from battling disease to addressing climate change. (Source of left photo: TED Conference, licensed under CC BY-NC 2.0)

The Copenhagen Consensus approach to climate policy presumes that human-induced climate change is occurring and that it probably will have adverse effects, but it contends that other social and environmental issues are more serious threats to humanity and should be addressed as higher priorities. Its careful analyses came to recognize the limitations of currently available technologies in achieving a cost-effective transformation of the global energy system. This is why it advocates prioritizing a significant increase in funding of basic science to accelerate the discovery and commercialization of new emission-reducing technologies. It also places priority on measures taken to adapt to (rather than seek to prevent) potential climate changes and to enhance the overall resiliency of the energy system.

Climate Change Policy Implications for Canada

The Copenhagen Consensus’ cost-benefit-based prioritization of climate change policies is applicable to Canadian policy-making and governance approaches in several important and broad areas, at not only the national but international and inter-provincial levels. What follows is a brief, simplified discussion of the most important aspects, keeping in mind that some of these are large issues in themselves and not resolvable overnight.

Remove the Pressure of Overly Ambitious and Arbitrary Targets

Canada has never met any of the targets set at the international or national levels regarding either the magnitude of emission reductions or the arbitrary dates by which these would be reached. The use of such arbitrary and unrealistic targets should be reduced or avoided. A first step in applying the Copenhagen Consensus’ recognition of the immense difficulty and complexity of achieving an energy transition, along with the need for new technologies whose development does not occur according to a government-controlled timetable, would be for Canada to postpone the “Net-Zero by 2050 goal” to at least 2070 if not 2100.

Adopt a Multi-Goal Framework

Canadian climate policy would henceforth be developed within a multi-goal public policy framework. Rather than making emission reduction the preeminent goal, the federal government would seek to optimize climate policy alongside multiple other public policy objectives including economic prosperity (growth, employment, investment and trade), social harmony, environmental quality, financial responsibility, energy security, defence and promotion of good federal-provincial and international relations, among others.

“Arbitrary targets”: Applying Copenhagen Consensus rational analysis would mean abandoning or postponing Canada’s “Net-Zero by 2050” goal and focusing instead on practical environmental improvement projects. Shown at bottom, the Gold Bar Wastewater Treatment Plant in Edmonton, Alberta. (Sources of photos: (top) JessicaGirvan/Shutterstock; (bottom) Urban Edmonton)

Prioritize the Real Environmental Problems

Despite what one reads and hears in the mainstream media, Canada has very high environmental quality and the areas that need improvement are relatively few. These include solid waste management, sanitation/wastewater treatment and sulphur dioxide emissions per unit of GDP. Most of these are provincial and/or municipal responsibilities, but the federal government can play a role in funding capital investments. Where the federal government has jurisdiction and must regulate, regulatory efforts should focus on addressing tangible environmental problems with practical, cost-beneficial, affordable solutions to further clean up the air, water and soil, and the results should be measured and tracked by comprehensible and publicly available metrics.

Adhere to Technological Realism

A common-sense approach would recognize that energy transitions take a long time. The pace of transition away from fossil fuels must, accordingly, be guided by the rate at which new scientific discoveries can be applied to the development of new products and services and then commercialized to the point of true economic viability. A common-sense policy approach in Canada would abandon the presumption that governments can and should attempt to hasten the technology commercialization process by “picking winners”, granting large subsidies to favoured firms or otherwise trying to centrally plan the changes in the energy economy. Instead, the new approach would entail higher levels of government funding for basic research and development.

Promote Energy Security and Reliability

A new Canadian climate policy would repeal or substantially amend the Clean Electricity Regulations that mandate the elimination of hydrocarbon-based electricity generation by 2035, a goal that this recent study concludes is completely unfeasible. It would also require that future federal or provincial regulation of GHG emissions be based upon a systematic review of the potential impacts on the viability and competitiveness of Canadian industry. Finally, it would eliminate the impending federal cap on oil and natural gas industry emissions (which was unveiled on November 4 and imposes a 35-percent rollback in GHG emissions by 2030) and take other measures to ensure that Canada, which has the world’s third-largest crude oil reserves as well as world-scale natural gas reserves, can continue to increase energy production to meet the needs of domestic and export markets.

The steep cost of compliance: The Justin Trudeau government’s 2030 Emissions Reduction Plan will add an estimated $55,000 to the average price of a new home, pointing to the need to eliminate costly and pointless regulation. (Source of photo: pnwra, licensed under CC BY 2.0)

Reduce Housing Costs

According to the Fraser Institute, the federal government’s 2030 Emissions Reduction Plan could add about $55,000 to the average cost of a new home built in Canada. Even more stringent and costly regulations would undoubtedly follow after 2030 to meet the net zero target. A new Canadian climate policy would abandon this plan and leave the establishment of building codes, zoning and construction approvals in the hands of provincial and municipal governments. This would contribute meaningfully to addressing Canada’s housing affordability crisis.

Legislate Wisely

A new policy would include amending or repealing the Canadian Net-Zero Emissions Accountability Act. The entire law is a litigation “trigger” because it gives climate activist organizations weapons that they can use to engage in “lawfare” – the strategic use of legal proceedings to hinder, intimidate or delay an opponent.

Depoliticize the Regulation of Energy Infrastructure Projects

A new policy would return the regulation of energy infrastructure and rate-making to a process that takes place at arm’s length from government political and policy direction. This would require changes to the federal minister’s control of the Canadian Energy Regulator. It would also be highly desirable to reform the system of environmental assessment and review by placing strict time limits on the duration of infrastructure project reviews. Today, regulatory reviews of major energy projects often take five years or longer to complete, and some have taken over 10 years.

The federal Impact Assessment Act (having last year been found largely unconstitutional by the Supreme Court of Canada) would be substantially amended so that the resulting federal law returns to being a review of the national environmental impacts (and any local impacts as these pertain to areas of clearly federal jurisdiction) rather than an exercise in jurisdictional duplication and an assessment of consequences for the entire planet.

A common-sense climate change policy would also streamline, limit the scope of and quicken the currently often 10-year-long environmental assessment process. Shown, the LNG Canada project in Kitimat, B.C. under construction, January 2024. (Source of screenshot: Northcoast Drone/YouTube)

The principle of “whoever hears the evidence should decide” would be brought back into the law, with an appeal to the courts on a question of law only and an appeal to the federal Cabinet on a question of policy. This is how the Canadian Radio-television and Telecommunications Commission (CRTC) has worked for several decades.

The arbitrary and harmful bans on oil tanker traffic on the Pacific Coast and on new hydrocarbon exploration and development in Canada’s Far North would be removed.

Promote Federal-Provincial Harmony

In the pre-2000 period, federal climate policy explicitly recognized that measures should not entail undue costs and burdens on any region or province. This went out the window in the Trudeau era and became a leading cause of federal-provincial discord. A new policy would re-institute this as a cardinal principle. Among other things, it would also be essential to ensure that there was ample coordination and consultation with all affected provinces before any new international commitments were made.

Focus on harmony: To promote more efficient cross-border trade, Canada’s regulatory standards should align with those of the U.S. The incoming Donald Trump Administration is likely to discard electric vehicle mandates and “clean” fuel standards, policy shifts that will affect Canada. (Sources of photos: (top) AP Photo/Evan Vucci; (bottom) Sundry Photography/Shutterstock)

Harmonize Canadian and United States Regulatory Regimes

It would be recognized that facilitating more seamless cross-border trade with Canada’s largest trading partner, the United States, requires that regulatory standards and codes developed in Canada – especially those governing the fuel efficiency and emissions intensity of vehicles and appliances – be closely aligned with U.S. federal standards. It is widely expected that the incoming Trump Administration will discard electric vehicle mandates and “clean” fuel standards, policy shifts that clearly will affect Canada. This is not to suggest that Canada allow its policies to be dictated by the U.S., but close attention should be paid.

Facilitate Truly Responsible Investing

Canada has committed to adopting the new Sustainability Disclosure Standard under International Financial Reporting Standards (IFRS), which imposes mandatory sustainability-related disclosure and climate-related financial disclosure. These and similar regulatory initiatives are increasing the burden on Canadian firms to report not only their own estimates of GHG emissions but also to try to guess those of their suppliers and customers. This is absurd on its face and creates another trigger for endless litigation when such guesses turn out wrong, prompting accusations of fraud. A new Canadian climate policy would severely restrict the use of such accounting measures.

Build Adaptation and Resilience

A new Canadian climate policy would place greatly increased, perhaps primary, emphasis on measures to increase the resilience of Canadian infrastructure and economy to future climate changes. Adaptation measures can avoid or reduce adverse future impacts by, for example, changing human behaviour in advance, such as land use rules that prohibit construction of buildings in flood-prone areas, or by taking actions to protect valued resources, communities and landscapes. Many adaptation measures also increase resilience towards climatic variability such as droughts and storms, making them potentially attractive policies even in the absence of long-term human-induced changes. They can pay dividends to society even if all the concerns about climate change turn out to be greatly exaggerated.

A new climate change policy should include measures to increase the resilience of Canadian infrastructure and the economy to future climate changes. Shown, (at top) a storm in coastal Nova Scotia; (at bottom) flooding in B.C.’s Lower Mainland. (Sources of photos: (top) The Canadian Press/Andrew Vaughan; (bottom) The Canadian Press/Jonathan Hayward)

Who Might Implement the Copenhagen Consensus in Canada?

It is clear that the Trudeau government is incapable of such a significant policy reform as summarized above. It is at least conceivable that, were Trudeau to be replaced before the next election, his successor might consider some of these measures; conceivable, but not likely. Most probably, the task of implementing such broad policy changes would fall to a new Conservative federal government. The party’s promises to “Axe the Tax” correctly address the mounting public concern about the impact of carbon taxes on the cost of living and competitiveness of Canadian business, as well as the unfairness with which they have been applied.

Fairly soon, however, the current Official Opposition is likely to take on the responsibility of actually governing. To respond effectively to the economic and political threats posed by climate catastrophism, advocates of policy change must go beyond merely targeting individual policies for cancellation based on complaints about the harm they do. They must think through what a realistic, credible, politically palatable – and cost-effective – climate policy framework would look like. The time to start is now.

*Cost-benefit analysis is a tool economists use to compare the estimated costs and benefits (or opportunities) associated with a proposed undertaking. It involves tallying up all the current and projected long-term costs and benefits, assigning dollar values to those that lack them, and converting everything into present-value terms using discount rates. If the costs outweigh the benefits, decision-makers should rethink whether to proceed.
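
To make the arithmetic concrete, below is a minimal sketch in Python of the present-value calculation described above. The cash flows and the five percent discount rate are hypothetical illustrations chosen for this example, not figures drawn from any actual policy analysis.

# Minimal net-present-value sketch of the cost-benefit logic described above.
# All figures are illustrative assumptions, not values from any real analysis.

def present_value(amount: float, year: int, discount_rate: float) -> float:
    """Convert a dollar amount received in a future year into today's dollars."""
    return amount / (1 + discount_rate) ** year

def net_present_value(costs, benefits, discount_rate=0.05):
    """Sum discounted benefits minus discounted costs over the project's life."""
    pv_costs = sum(present_value(c, yr, discount_rate) for yr, c in enumerate(costs))
    pv_benefits = sum(present_value(b, yr, discount_rate) for yr, b in enumerate(benefits))
    return pv_benefits - pv_costs

# A hypothetical measure: $100 million up front plus $10 million per year to
# run, set against $25 million per year in benefits starting in year one.
costs = [100] + [10] * 9     # years 0 through 9
benefits = [0] + [25] * 9    # benefits begin in year 1
print(f"Net present value: ${net_present_value(costs, benefits):,.2f} million")
# A positive result supports proceeding; a negative one argues for a rethink.

Because later dollars count for less once discounted, the choice of discount rate can swing the verdict, which is why it is often the most contested assumption in such analyses.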

Robert Lyman is a retired energy economist who served for 25 years as a policy advisor and manager on energy, environment and transportation policy in the Government of Canada.

Drinking by the Numbers: What Statistics Canada Doesn’t Want You to Know

By Peter Shawn Taylor
“The secret language of statistics, so appealing in a fact-minded culture, is employed to sensationalize, inflate, confuse, and oversimplify,” cautioned journalist Darrell Huff in his famous 1954 book How to Lie with Statistics. It’s still useful advice, although Canadians might hope such a warning isn’t required for the work of Statistics Canada. In an exclusive C2C investigation, Peter Shawn Taylor takes apart a recent Statcan study to reveal its use of controversial, woke and unscientific methods to confuse what should be the straightforward task of reporting on the drinking habits of Canadians in various demographic groups. He also uncovers data the statistical agency wants to keep hidden for reasons of “historical/cultural or other contexts”.

Statistics Canada would like to know how much you’ve been drinking.

In October, the federal statistical agency released “A snapshot of alcohol consumption levels in Canada” based on its large-scale 2023 Canadian Community Health Survey that asked Canadians how much they drank in the previous week. The topline number: more than half of those surveyed – 54.4 percent – said they didn’t touch a drop in the past seven days. This is considered “no risk” according to the Canadian Centre on Substance Abuse and Addiction’s (CCSA) 2023 report Canada’s Guidance on Alcohol and Health, which Statcan uses as its standard. Among those who did imbibe, 15.2 percent said they’d had one or two drinks in the last week, an amount the CCSA guidance considers “low risk”, 15.2 percent said they’d consumed between three and six drinks, considered by CCSA to be “moderate risk”, and the remaining 15.1 percent admitted to seven or more drinks per week, what the CCSA calls “increasingly high risk”.

Statcan then sliced this information several different ways. By gender, men reported being bigger drinkers than women, based on their relative share in the “high risk” category (19.3 percent versus 11.1 percent). By age, the biggest drinkers were those aged 55 to 64, with 17.4 percent consuming at least one drink per day. Perhaps surprisingly, the college-aged 18-to-22 group reported the lowest level of “high risk” drinking of any age bracket, at 8.4 percent, an outcome consistent with other observations that younger generations are becoming more conservative.

Statcan’s data also reveals that Quebeckers are the biggest drinkers in the country, with 18.1 percent in the “high risk” category, while Saskatchewan and New Brunswick had the highest proportions of teetotalers. Rural residents are bigger drinkers than those living in urban areas. By occupation, those holding male-dominated jobs in the trades, equipment operation and transportation were the most likely to report drinking in the “high risk” category of seven or more per week. Finally, the richest Canadians – those in the top income quintile – said they drink more than Canadians in lower income quintiles, an outcome that seems logical given the cost of a bottle these days.

The demographic detail in Statcan’s alcohol consumption survey is extensive and largely in keeping with general stereotypes. The quintessential drinker appears to be a middle-aged blue-collar male living in rural Quebec. (The report does, however, note an enormous discrepancy between self-reported consumption and national alcohol sales, with self-reported amounts accounting for a mere one-third of actual product sold. This suggests many Canadians are far from truthful when describing how much they drink.)

Despite the apparent surfeit of information, however, several demographic categories are missing from Statistics Canada’s report. And not by accident. According to a “Note to readers” at the bottom of the October report, the survey “included a strategic oversample to improve coverage…for racialized groups, Indigenous people, and persons with disabilities. While this analysis does not contain results for these populations (primarily owing to the need to delve into historical/cultural or other contexts for these groups as it pertains to alcohol consumption), the Canadian Community Health Survey 2023 data is now available to aid researchers looking into health analysis for these populations.”

The upshot of this word salad: Statcan went to extra lengths to get high-quality information on the alcohol consumption of natives, visible minorities, immigrants and people with disabilities. And then it enshrouded these numbers in a cloak of secrecy, choosing not to release that information publicly because of “historical/cultural or other contexts”. Why is Canada’s statistical agency keeping some of its data hidden?

Canada’s Guidance on Alcohol and Health

Before investigating the missing data, it is necessary to discuss a controversy regarding the alcohol consumption guidelines used by Statcan. As mentioned earlier, its survey is based on new CCSA standards released last year which consider seven or more drinks per week to be “increasingly high risk”. This is the result of recent CCSA research that claims “even a small amount of alcohol can be damaging to health.” By focusing on the incidence of several obscure cancers and other diseases associated with alcohol consumption, the CCSA recommends that Canadians cut back drastically on their drinking. For those who wish to be in the “low risk” group, the CCSA recommends no more than two drinks per week for both men and women – and not downing both on the same day.

To your health: The “J-Curve” plots the well-documented relationship between moderate social drinking and a long lifespan, revealing the healthiest level to be around one drink per day, what the new CCSA standards call “high risk”.

Such a parsimonious attitude towards drinking is at sharp odds with earlier CCSA findings. In 2011, the CCSA released “Canada’s Low Risk Alcohol Guidelines”, which defined “low risk” drinking levels very differently. Under this older standard, Canadians were advised to limit their consumption to 15 drinks per week (10 for women) and no more than three per day. It also acknowledged that it was okay to indulge on special occasions, such as birthdays or New Year’s Eve, without fear of any long-term health effects.

These rules were based on ample medical evidence pointing to substantial health benefits arising from moderate drinking, given that social drinkers tend to live longer than both abstainers and alcoholics – a statistical result that, when placed on a graph, yields what is commonly referred to as the “J-Curve”. These rules also aligned with social norms and hence garnered broad public support.

The dramatic contrast between the 2011 and 2023 CCSA drinking guidelines has attracted strong criticism from many health experts. Dan Malleck is chair of the Department of Health Sciences at Brock University in St. Catharines, Ontario, as well as director of the school’s Centre for Canadian Studies. In an interview, he bluntly calls the new CCSA guidelines “not useful, except as an example of public health over-reach.” Malleck argues the emphasis CCSA now places on the tiny risk of certain cancers associated with alcohol ignores the vast amount of evidence proving moderate drinking confers both physical and social advantages. This, he says, does a disservice to Canadians.

“The opposite of good public health advice”: According to Dan Malleck, chair of Brock University’s Department of Health Sciences, the Canadian Centre on Substance Abuse and Addiction’s (CCSA) 2023 guidelines suggesting alcohol in any amount is a health hazard are unrealistic. (Source of photo: Brock University)

“The Opposite of Good Public Health Advice”

“There are two possible responses” to the CCSA’s new drinking guidelines touting near-abstinence as the preferred course of action, Malleck says. “People will hear the message that no amount of drinking is healthy and simply ignore the recommendations altogether because they’re so restrictive – and so we end up with no effective guidance. Or they’ll take it all at face value and become fearful that having just two beers a week will give them cancer. Creating that sort of anxiety isn’t useful either.” Considering the two alternatives, Malleck says the end result “is the opposite of good public health advice.”

Perhaps surprisingly, it appears Ottawa agrees with this assessment. While the CCSA is a federally funded research organization, it is not a branch of the civil service. As such, its work does not automatically come with an official imprimatur. Rather, its reports have to be adopted by Health Canada or another department to become government policy. This was the case with its 2011 guidance. It is not the case with the CCSA’s new report.

In response to a query from C2C, Yuval Daniel, director of communications for Ya’ara Saks, the federal minister of Mental Health and Addictions, stated that, “The Canadian Centre for Substance Abuse and Addiction’s proposed guidelines have not been adopted by the Government of Canada. Canada’s 2011 low-risk alcohol drinking guidelines remain the official guidance.”

Too strict even for the Liberals: Federal Mental Health and Addictions Minister Ya’ara Saks has chosen not to adopt the CCSA’s 2023 drinking guidelines as official policy – yet Statistics Canada insists on using them to measure Canadians’ drinking habits. (Source of photo: The Canadian Press/Adrian Wyld)

It seems the CCSA’s new and abstemious drinking guidelines are too strict even for the federal Liberals. The 2011 standard, which considers anything up to 15 drinks per week to be “low risk”, remains the government’s official advice to Canadians. While this seems like a small victory for common sense, it raises another question: if the federal government has refused to adopt the strict 2023 CCSA drinking standards, why is Statcan using them in its research?

According to Malleck, the appearance of the new, unofficial CCSA alcohol guidance in Statcan’s work “legitimizes” the explicitly unapproved guidelines. “It further reinforces these seemingly authoritative, government-funded recommendations” and obscures the sensible, official advice contained in the earlier guidelines, he says. It seems a strange state of affairs. But given other odd aspects of Statcan’s alcohol survey, it is in keeping with an emerging pattern of problematic behaviour at the statistical agency. Statcan is no longer merely gathering information and presenting it in an objective way, to be applied as its users see fit; the agency appears to be crafting its own public policy by stealth.

Uncovering the Missing Data

Recall that Statcan’s recent alcohol survey withheld consumption data regarding racial, Indigenous and disabled status for reasons of “historical/cultural or other contexts”. Although the statistical agency collected the relevant numbers, it then restricted access to researchers “looking into health analysis for these populations.” As a media organization, C2C requested this data on the grounds it was public information. After some back-and-forth that included the threat of a $95-per-hour charge to assemble the figures, Statcan eventually provided the once-redacted numbers for free. With the data in hand, it seems obvious which numbers were withheld and why.

Nothing about alcohol consumption by immigrant status or race appears newsworthy. Immigrants are revealed to be very modest drinkers, with 68 percent reporting no alcohol consumed in the past week, and only 7 percent admitting to being in the “high risk” seven-drinks-per-week category. Similar results hold for race; Arab and Filipino populations, for example, display extremely high rates of abstinence, at 88 percent and 80 percent, respectively. Disabled Canadians are also very modest drinkers.

The only category that seems worthy of any comment is that of Indigenous Canadians. At 20.1 percent, aboriginals display one of the highest shares of “high risk” drinkers in the country.

Out of sight, out of mind: Statcan’s recent report on alcohol consumption deliberately withholds data on Indigenous Canadians for reasons of “historical/cultural or other contexts”. (Source of photo: AP Photo/William Lauer, File)

According to Malleck, Statcan’s reference to “historical/cultural or other contexts” in withholding some drinking data is a clear signal the move was meant to avoid bringing attention to Indigenous people and their problematic relationship with alcohol. “A lot of people will now err on the side of caution when it comes to this kind of information [about Indigenous people],” he says. This is a phenomenon that has been building for some time. Nearly a decade ago, the 2015 Truth and Reconciliation Commission’s Calls to Action made numerous demands about how governments and universities deal with Indigenous knowledge and history. “I can see the people at Statcan saying that this [new data] will play into the so-called ‘firewater myth’ and be too damaging culturally to justify its inclusion,” Malleck adds.

“The Unmentioned Demon”

It is certainly true that Canada’s native population has been greatly damaged by alcohol since the beginning of white settlement in North America. As early as 1713 the Hudson’s Bay Company told its staff at Fort Albany, in what is now northern Ontario, to “Trade as little brandy as possible to the Indians, we being informed it has destroyed several of them.”

Later, the pre-Confederation era featured many legislative efforts to limit native access to alcoholic spirits. Further, one of the purposes behind the creation of Canada’s North West Mounted Police (NWMP) was to interdict American whiskey traders at the U.S. border to prevent them from selling their wares to Canadian tribes, which were suffering catastrophically from alcohol. The NWMP were notably successful in that mission, earning the fervent gratitude of prominent Indigenous chiefs on the Prairies. More recently, the topic of alcoholism on native reserves has been the subject of several books, including former Saskatchewan Crown prosecutor Harold Johnson’s powerful 2016 work Firewater: How Alcohol is Killing my People (and Yours).

Canada’s native community has struggled with alcohol abuse ever since white settlement began. Many federal policies have attempted to address this, including the creation of Canada’s North West Mounted Police (NWMP) in 1873. Shown, NWMP officer with members of the Blackfoot First Nation outside Fort Calgary, 1878.

With all this as background, it should not come as a surprise that Indigenous communities continue to struggle with high rates of alcohol use and abuse. In fact, such detail is easily accessible from other government sources. The federal First Nations Information Governance Centre, for example, reveals that the rate of binge drinking (five drinks or more in a day, at least once per month) among Indigenous Canadians is more than twice the rate of the general population – 34.9 percent vs. 15.6 percent. Reserves and Inuit communities also display extremely high rates of Fetal Alcohol Spectrum Disorder (FASD), which is caused by drinking during pregnancy. Some research shows FASD rates are 10 to 100 times higher among Indigenous populations than the general Canadian population. This C2C story calls FASD “the unmentioned demon that haunts the native experience throughout Canada.”

Given all this readily available information, it makes little sense for Statcan to collect and then withhold data about Indigenous drinking. Such an effort will not make the problem go away, nor change public perceptions. Indeed, the only way to reduce alcoholism on reserves and among urban native communities is to confront the situation head-on. The first step in Alcoholics Anonymous’ 12-step recovery program is, notably, admitting to the existence of the problem itself.

With regard to sensitivity about identity, Statcan showed no qualms about labelling Quebeckers as the thirstiest drinkers in the country, or about reporting that men employed in the trades, equipment operation and transportation are among the likeliest to down seven or more drinks per week. Further, Indigenous Canadians are not even the country’s biggest imbibers. That distinction belongs to the top quintile of income-earners, with 21.5 percent of Canada’s highest earners in the “high risk” category.

Habs fans at work: While Statcan appears unwilling to publish data revealing that Indigenous Canadians are among the biggest drinkers in Canada, it has no such qualms about identifying Quebec as Canada’s thirstiest province. (Source of photo: CTV News Montreal)

This effort to spare Indigenous Canadians the ignominy of being recognized as among the country’s biggest drinkers – even after the agency devoted extra time and effort to researching their habits – follows a 2021 federal Liberal directive that requires Statcan to spend more resources on certain targeted groups. The $172 million, five-year Disaggregated Data Action Plan (DDAP), which is referenced in the alcohol report’s footnotes, is an effort to collect more detailed data about Indigenous people, women, visible minorities and the disabled “to allow for intersectional analyses, and support government and societal efforts to address known inequities and promote fair and inclusive decision-making.”

Setting aside the tedious terminology of the diversity, equity and inclusion (DEI) movement, it may well be a reasonable policy goal to collect more and better information about underprivileged groups. With better information comes greater knowledge and, it can be hoped, an improved ability to plan. But such efforts are for naught if this additional data is then hidden from public view because it might cast favoured groups in a bad light.

Ottawa’s $172 million Disaggregated Data Action Plan (DDAP), unveiled in 2021, is meant to collect and distribute more detailed data on targeted groups including women, Indigenous people and the disabled. It doesn’t always work as promised.

Canada’s Statistical Agency Goes Random

The apparent data damage arising from the new DDAP is not limited to hiding results about Indigenous Canadians. It is also affecting results by gender. Recall that the October alcohol consumption report reveals a clear male/female split in drinking habits, with men drinking substantially more than women. On closer inspection, however, this distinction refers only to self-reported gender identity – not to biological sex. As a result of a separate 2018 directive, the statistical agency is now forbidden from asking Canadians about their sex “assigned” at birth.

This is in keeping with woke ideology favoured by the federal Liberals that regards gender as a social construct separate from biology. But such a policy entails several significant problems from a statistical point of view. For starters, it makes it difficult to compare results with previous years, when gender was defined differently. According to Statcan, this is no big deal: “Historical comparability with previous years is not in itself a valid reason to be asking sex at birth.” These days, ideology matters more than statistical relevance, even to those who once held sacred the objective gathering of high-quality data.

This new policy also means that in situations where biological sex is crucial to interpreting the data – health issues, for example – the results are now muddied by the conflation of gender with sex. This is particularly relevant when it comes to self-identified transgender or non-binary individuals. In following the new rules set out by the DDAP, Statcan now takes all transgender and non-binary responses and shuffles them arbitrarily between the male and female categories – what have since been renamed as Men+ and Women+. As Statcan itself reports, this data is “derived by randomly distributing non-binary people into the Men+ or Women+ category; data on sex at birth is not used in any steps of this process.”

Anti-scientific: As a result of the DDAP, Statcan now randomly distributes responses from people who self-identify as transgender or non-binary into its Men+ and Women+ categories, making a mockery of good statistical practice. (Source of photo: Shutterstock)

In other words, Statcan is now randomly allocating the responses it receives from anyone who says they are transgender or non-binary into the Men+ and Women+ categories. Transgender women who remain biologically male may thus be counted together with biological women. Doing so is, of course, entirely unscientific. Randomizing data points that have been carefully collected undermines the entire statistical process and weakens the usefulness of any results. Taken to the extreme, such a policy could produce such medical data absurdities as rising rates of prostate cancer among Women+ or a baby boom birthed by Men+. Consider it a triumph of wokeness over basic science and math.
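
To see concretely what such a rule does to the data, consider the following minimal Python sketch. The survey records, field names and coin-flip allocation are hypothetical illustrations of the procedure described in Statcan’s own wording, not the agency’s actual code.

import random

# Hypothetical survey records; the field names and values are illustrative only.
respondents = [
    {"id": 1, "gender": "man"},
    {"id": 2, "gender": "woman"},
    {"id": 3, "gender": "non-binary"},  # will be assigned by coin flip
    {"id": 4, "gender": "non-binary"},  # will be assigned by coin flip
]

def assign_category(record):
    """Mimic the quoted rule: non-binary responses are randomly placed into
    Men+ or Women+, and sex at birth is never consulted at any step."""
    if record["gender"] == "man":
        return "Men+"
    if record["gender"] == "woman":
        return "Women+"
    return random.choice(["Men+", "Women+"])  # arbitrary allocation

for r in respondents:
    r["category"] = assign_category(r)

# Any sex-linked tabulation built on "category" now mixes in records whose
# biology is unknown - the muddying of health data described in the text.
print([(r["id"], r["category"]) for r in respondents])

Run twice, the same four respondents can land in different categories each time; groupings that change on every re-draw cannot support the kind of sex-linked health analysis discussed above.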

Statistical Irrelevance in Three Easy Steps

As its work becomes more overtly political and ideological following nearly a decade under the Justin Trudeau government, Statistics Canada is endangering its own reputation as a reliable and impartial source of data. The October survey on alcohol consumption contains three examples of this lamentable slide into incoherence – a slide which, if not halted promptly, will lead to growing irrelevance.

First is the presentation of controversial new CCSA alcohol consumption guidelines as an official standard by which Canadians should measure their alcohol use. In fact, these guidelines have no federal standing whatsoever; the actual official standards are much more permissive. It is not clear why Statcan would promote these unofficial and scientifically dubious recommendations. In effect, the agency has teamed up with a temperance-minded organization that seems determined to convince Canadians they are drinking too much booze.

This party can’t last forever: Statcan’s recent survey on Canadians’ drinking habits reveals the many ways in which the statistical agency is becoming increasingly ideological in how it collects (and hides) data. If left unchecked, this will eventually lead to its irrelevance as a source of reliable information. (Source of photo: CanadaVisit.org)

Second, Statcan wants to prevent Canadians from having ready access to information about alcohol consumption by Indigenous Canadians. This may be the result of some misconstrued sense of sympathy or obligation towards native groups. In doing so, however, the statistical agency is hiding an important public policy imperative from the rest of the country. It should be the job of Canada’s statistical agency to collect and distribute high-quality data that is relevant to the Canadian condition regardless of whether the resulting inferences are for good or ill. While the $172 million DDAP program was promoted as the means to shine a brighter light on issues of concern for marginalized groups, it now appears to be working in reverse – hiding from public view issues that should concern all Canadians.

Finally, Statcan’s gender-based data collection policy is doing similar damage – and could do vastly more in the future as long-term datasets become ever-more degraded. Also based on the Liberals’ Disaggregated Data Action Plan, the agency now collects responses from Canadians who identify as transgender and non-binary and then randomly allocates these between its Men+ and Women+ categories, undermining the quality and reliability of its own work. While the actual numbers of non-binary Canadians may be vanishingly small, such a flaw should be a big deal for anyone who cares about rigorous statistical validity. And surely Statistics Canada should care.

Peter Shawn Taylor is senior features editor at C2C Journal. He lives in Waterloo, Ontario.
