Opinion
Budget 2019 – Don’t spend your new Canada Training Credit just yet
On March 19, 2019, the federal government tabled its election-year budget. One of the new provisions is a refundable credit called the Canada Training Credit. However, the $250 credit won’t even be available until you file your 2020 income tax return in April of 2021.
Further, if you were born in 1995 or later, you won’t qualify yet. If you were born in 1954 or earlier, you will never be eligible.
In addition, the maximum benefit you can receive is $5,000 in a lifetime (which will take 20 years to get at $250 a year) and the benefit can only be used to a maximum of 50% of eligible tuition costs.
So let’s consider the following scenario:
It is 2019 – you are 25 years of age making $27,000 a year and file your taxes every year.
You decide to take advantage of this credit and enroll in your first semester of schooling in the fall of 2023.
According to Statistics Canada, the average Canadian undergraduate pays $3,419 per semester.
So, you take time off work to go to school full-time in the fall, thus reducing your income by 1/3 in the year to $18,000.
Under the current 2019 rules, you would owe only $39 of federal income tax. This amount is low because the tuition credits reduce your taxes.
By 2023, you have built up a “pool” of $250 for each year since 2019, and believe you have a $1,000 pool available for that year.
When you file your 2023 return, the $1,000 is triggered as a refundable tax credit. But you won’t be getting $961 back ($1,000 − $39).
Here’s the catch:
The $1,000 pool also reduces the amount you can claim in tuition credits, which raises your federal tax owing to $189. In effect, the $1,000 pool you waited for is reduced by 15% by the time you receive it.
Cash in jeans: $811.
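For readers who want to see the arithmetic, the first scenario can be sketched in a few lines. The figures come from the column above; the only assumption is the 15% federal rate on the lowest tax bracket:

```python
# Scenario 1: fall semester only (figures from the article above).
# Assumption: 15% federal rate on the lowest tax bracket.
ctc_pool = 4 * 250           # $250 accrued per year, 2019 through 2022
tax_without_ctc = 39         # federal tax after full tuition credits

# Claiming the CTC reduces the tuition eligible for the tuition credit
# dollar for dollar, so tax owing rises by 15% of the amount claimed.
tax_with_ctc = tax_without_ctc + 0.15 * ctc_pool   # $189

cash_in_jeans = ctc_pool - tax_with_ctc            # $811
print(cash_in_jeans)
```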
But what if the course you decided to go into begins in January of 2023? You go for the January-April semester, work from May-August, and attend school September-December.
Using the same $27,000 salary, your income is now reduced by 2/3 while you attend full time: working only from May to August leaves you with $9,000 of income.
Your tuition (possibly paid through student loans) is $6,838 for the year.
Your tax is now zero because, even before tuition credits, your earnings are below the Basic Personal Amount.
Does this mean you get the full $1,000?
No.
Because your income is less than $10,000 in 2023, you don’t get the $250 for that year. As such, you only get $750, and your tuition credits available for carryforward are also reduced by $750, creating a future tax cost of $112.50.
Net result: $637.50 cash in jeans.
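The second scenario follows the same pattern; a short sketch, again assuming the 15% federal rate:

```python
# Scenario 2: two semesters, income drops to $9,000 for the year.
ctc_pool = 3 * 250        # the sub-$10,000 year costs one $250 accrual
tax_now = 0               # earnings are below the Basic Personal Amount
# Carryforward tuition credits are still cut by the $750 claimed,
# costing 15% of that amount in future tax.
future_tax_cost = 0.15 * 750
net = ctc_pool - tax_now - future_tax_cost   # $637.50
print(net)
```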
What if you are a parent who decides to stay home with the kids until they are in school full time, and you go back to school in 2023?
Unfortunately, because you did not make more than $10,000 a year in any of the years, you get zero.
What if you were laid off, collecting regular EI benefits, and decide to go back to school?
Regular EI Benefits don’t qualify for the $10,000 income calculation. As a result, unless you had special EI benefits like parental leave or earned income from another source greater than $10,000, you don’t qualify.
What if you were self-employed through a small business corporation and paid yourself dividends instead of wages and then decided to upgrade your training?
Your dividend income does not qualify, and so you are not eligible for amounts to be added to the pool.
So assume you qualify and wait the four years to build up a pool of $1,000 (remember, that $1,000 is only a net $850 because of the reduction in tuition credits). That same Statistics Canada report says tuition is increasing at 3.3% per year. In the four years you waited for the net $850, your annual tuition has likely increased from $6,838 to $7,786, an increase of $948.
You waited four years, and the tax amount you receive won’t even cover the inflationary price increase on tuition.
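The inflation comparison works out as follows, compounding the 3.3% annual increase over the four waiting years:

```python
# Tuition inflation vs. the net value of the credit.
tuition_2019 = 6838
tuition_2023 = tuition_2019 * 1.033 ** 4     # ~$7,786
increase = tuition_2023 - tuition_2019       # ~$948
net_credit = 1000 * (1 - 0.15)               # $850 after the 15% clawback
print(increase > net_credit)                 # the credit doesn't keep up
```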
In Conclusion
- Those who do qualify won’t see anything until April 2021; the actual net amount they will see is only $212.50 a year; and their annual tuition will likely have increased by $225.65;
- Students under the age of 25 will see nothing;
- People over the age of 25 who don’t have more than $10,000 of income will see nothing;
- Seniors will see nothing;
- Parents looking to re-enter the workforce will see nothing; and
- People who have been laid off and have less than $10,000 of non-EI income will see nothing.
Seems like a lot of complex legislation for nothing.
—
Cory G. Litzenberger, CPA, CMA, CFP, C.Mgr is the President & Founder of CGL Strategic Business & Tax Advisors; you can find out more about Cory’s biography at http://www.CGLtax.ca/Litzenberger-Cory.html
Artificial Intelligence
The Emptiness Inside: Why Large Language Models Can’t Think – and Never Will
Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong and often surreal – the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AIs based on large language models (LLMs) are so polished, articulate and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing.
This confidence is misplaced. LLMs like ChatGPT or Grok don’t think. They are supercharged autocomplete engines. You type a prompt; they predict the next word, then the next, based only on patterns in the trillions of words they were trained on. No rules, no logic – just statistical guessing dressed up in conversation. As a result, LLMs have no idea whether a sentence is true or false or even sane; they only “know” whether it sounds like sentences they’ve seen before. That’s why they often confidently make things up: court cases, historical events, or physics explanations that are pure fiction. The AI world calls such outputs “hallucinations”.
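The “supercharged autocomplete” point can be illustrated with a toy sketch. Real LLMs use neural networks over tokens rather than simple word counts, so this hypothetical bigram model is only an analogy, but it captures the core mechanism: pick the continuation that was statistically most common in the training text, with no notion of truth:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    # Frequency is the only criterion; truth never enters into it.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" - it followed "the" most often
```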
But because the LLM’s speech is fluent, users instinctively project self-understanding onto the model, triggered by the same human “trust circuits” we use for spotting intelligence. But this is fallacious reasoning, a bit like hearing someone speak perfect French and assuming they must also be an excellent judge of wine, fashion and philosophy. We confuse style for substance and we anthropomorphize the speaker. That in turn tempts us into two mythical narratives:

Myth 1: “If we just scale up the models and give them more ‘juice’ then true reasoning will eventually emerge.”
Bigger LLMs do get smoother and more impressive. But their core trick – word prediction – never changes. It’s still mimicry, not understanding. One assumes intelligence will magically emerge from quantity, as though making tires bigger and spinning them faster will eventually make a car fly. But the obstacle is architectural, not scalar: you can make the mimicry more convincing (make a car jump off a ramp), but you don’t convert a pattern predictor into a truth-seeker by scaling it up. You merely get better camouflage and, studies have shown, even less fidelity to fact.
Myth 2: “Who cares how AI does it? If it yields truth, that’s all that matters. The ultimate arbiter of truth is reality – so cope!”
This one is especially dangerous as it stomps on epistemology wearing concrete boots. It effectively claims that the seeming reliability of LLMs’ mundane knowledge should be extended to trusting the opaque methods through which it is obtained. But truth has rules. For example, a conclusion only becomes epistemically trustworthy when reached through either: 1) deductive reasoning (conclusions that must be true if the premises are true); or 2) empirical verification (observations of the real world that confirm or disconfirm claims).
LLMs do neither of these. They cannot deduce because their architecture doesn’t implement logical inference. They don’t manipulate premises and reach conclusions, and they are clueless about causality. They also cannot empirically verify anything because they have no access to reality: they can’t check weather or observe social interactions.
Attempting to overcome these structural obstacles, AI developers bolt external tools like calculators, databases and retrieval systems onto an LLM system. Such ostensible truth-seeking mechanisms improve outputs but do not fix the underlying architecture.
The “flying car” salesmen, peddling various accomplishments like IQ test scores, claim that today’s LLMs show superhuman intelligence. In reality, LLM IQ tests violate every rule for conducting intelligence tests, making them a human prompt-engineering skills competition rather than a valid assessment of machine smartness.
Efforts to make LLMs “truth-seeking” by brainwashing them to align with their trainer’s preferences through mechanisms like RLHF miss the point. Those attempts to fix bias only make waves in a structure that cannot support genuine reasoning. This regularly reveals itself through flops like xAI Grok’s MechaHitler bravado or Google Gemini’s representing America’s Founding Fathers as a lineup of “racialized” gentlemen.
Other approaches exist, though, that strive to create an AI architecture enabling authentic thinking:
- Symbolic AI: uses explicit logical rules; strong on defined problems, weak on ambiguity;
- Causal AI: learns cause-and-effect relationships and can answer “what if” questions;
- Neuro-symbolic AI: combines neural prediction with logical reasoning; and
- Agentic AI: acts with the goal in mind, receives feedback and improves through trial-and-error.
Unfortunately, the current progress in AI relies almost entirely on scaling LLMs. And the alternative approaches receive far less funding and attention – the good old “follow the money” principle. Meanwhile, the loudest “AI” in the room is just a very expensive parrot.
LLMs, nevertheless, are astonishing achievements of engineering and wonderful tools useful for many tasks. I will have far more on their uses in my next column. The crucial thing for users to remember, though, is that all LLMs are and will always remain linguistic pattern engines, not epistemic agents.
The hype that LLMs are on the brink of “true intelligence” mistakes fluency for thought. Real thinking requires understanding the physical world, persistent memory, reasoning and planning – capabilities that LLMs handle only primitively or not at all, a design fact that is non-controversial among AI insiders. Treat LLMs as useful thought-provoking tools, never as trustworthy sources. And stop waiting for the parrot to start doing philosophy. It never will.
The original, full-length version of this article was recently published as Part I of a two-part series in C2C Journal. Part II can be read here.
Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario and grew up in various parts of the Soviet Union.
Armed Forces
Global Military Industrial Complex Has Never Had It So Good, New Report Finds

From the Daily Caller News Foundation
The global war business scored record revenues in 2024 amid multiple protracted proxy conflicts across the world, according to a new industry analysis released on Monday.
The top 100 arms manufacturers in the world raked in $679 billion in revenue in 2024, up 5.9% from the year prior, according to a new Stockholm International Peace Research Institute (SIPRI) study. The figure marks the highest ever revenue for manufacturers recorded by SIPRI as the group credits major conflicts for supplying the large appetite for arms around the world.
“The rise in the total arms revenues of the Top 100 in 2024 was mostly due to overall increases in the arms revenues of companies based in Europe and the United States,” SIPRI said in their report. “There were year-on-year increases in all the geographical areas covered by the ranking apart from Asia and Oceania, which saw a slight decrease, largely as a result of a notable drop in the total arms revenues of Chinese companies.”
Notably, Chinese arms manufacturers saw a large drop in reported revenues, declining 10% from 2023 to 2024, according to SIPRI. Just off China’s shores, Japan’s arms industry saw the largest single year-over-year increase in revenue of all regions measured, jumping 40% from 2023 to 2024.
American companies dominate the top of the list, which ranks individual companies by arms revenue, with Lockheed Martin taking the top spot at $64.65 billion in 2024, according to the report. Raytheon Technologies, Northrop Grumman and BAE Systems follow close behind.
The Czechoslovak Group recorded the single largest year-on-year jump in revenue from 2023 to 2024, increasing its haul by 193%, according to SIPRI. The increase is largely driven by the company’s crucial role in supplying arms and ammunition to Ukraine.
The Pentagon contracted one of the group’s subsidiaries in August to build a new ammo plant in the U.S. to replenish artillery shell stockpiles drained by U.S. aid to Ukraine.
“In 2024 the growing demand for military equipment around the world, primarily linked to rising geopolitical tensions, accelerated the increase in total Top 100 arms revenues seen in 2023,” the report reads. “More than three quarters of companies in the Top 100 (77 companies) increased their arms revenues in 2024, with 42 reporting at least double-digit percentage growth.”