What part of “rant” don’t you understand? Look it up in the dictionary!
I’m a sucker for simple handwaving arguments, since those are the only ones I can remember in full detail. Please forgive my self-indulgence below, and don’t misinterpret my oversimplification as an underestimation of your intelligence!
IMNERHO, any discussion of relative hazards should begin with “normalization”: the probability of a given person’s dying is (so far) exactly 1.0; put differently, the probability that all of us will still be alive 200 years from now is (so far) exactly 0.0 — barring exponential growth of life expectancy and/or “uploading” into hardware, we are all doomed. The only things we have any realistic hope of influencing are (1) how soon, and (2) of what we will die. Plus, of course, (3) what we will do in the time we have left.
That being established, we can look to epidemiology for the current status of (2): according to the Canadian Cancer Society, cancer will kill 26% of men and 22% of women in Canada; your mileage may vary. Since cancer is what we worry about most from radiation exposure, I will neglect other modes of expiration.
Are all those cancer deaths due to radiation? Since we cannot escape environmental radiation such as cosmic rays, that argument could be made. How would we test that hypothesis? Obviously, by comparing the incidence of cancer in populations with increased or decreased radiation exposure. But this requires an additional hypothesis about how said incidence depends on exposure. How do we choose that hypothesis? We must let the data guide us. We have such data, ranging from the exposed survivors of Hiroshima and Nagasaki to the denizens of Ramsar to patients receiving medical irradiation to airline pilots and so on. This data suggests that there is in fact a threshold dose (let’s restrict it to whole-body all-at-once exposures, for convenience) below which there is no statistically significant increase in cancer. There might even be a decrease, but let’s leave that for later.
To proceed further we need a much deeper understanding of mitosis, apoptosis and healing of DNA double-strand breaks. Much of this is well understood (I gather) by various medical researchers, but is well beyond my feeble grasp. So far no one has (to my knowledge) translated those understandings into what we need to make quantitative comparisons, namely a differential equation describing the time evolution of the probability of cancer developing under various radiation exposure schedules. We therefore argue about various empirical toy models based on bulk statistics — an unsatisfactory situation, to be sure!
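To make the “toy models” concrete, here is a minimal sketch of the two dose-response curves most often argued about: Linear No-Threshold versus a threshold model. Every number in it (the threshold value, the slope) is an arbitrary placeholder for illustration, not a real radiobiological parameter:

```python
# Two empirical toy models for excess cancer risk vs. radiation dose.
# ALL numbers here are arbitrary illustrations, not fitted parameters.

def lnt_risk(dose_mSv: float, slope: float = 1e-5) -> float:
    """Linear No-Threshold (LNT): every increment of dose adds risk."""
    return slope * dose_mSv

def threshold_risk(dose_mSv: float, threshold_mSv: float = 100.0,
                   slope: float = 1e-5) -> float:
    """Threshold model: no excess risk below the threshold dose."""
    return slope * max(0.0, dose_mSv - threshold_mSv)

# The two models agree badly at low doses -- exactly where the
# epidemiological data are noisiest, which is why the argument persists.
for dose in (10, 100, 1000):
    print(dose, lnt_risk(dose), threshold_risk(dose))
```

Below the (assumed) threshold the two models predict entirely different excess risks, which is why the choice of model, not the data alone, drives so many of the public arguments.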
But there are still meaningful quantitative questions we can ask! For instance, what if a radiation release from some reactor accident raises the probability of dying of cancer from 0.260 to 0.261 for a million men? How many “extra” deaths does that mean? Zero! Every one of those men was already doomed to die! “Oh come on, you know what I mean: how many extra premature deaths?” [My reaction to that word is recorded at https://jick.ca/?p=383 ] — but most people would answer, “1000 men!” That’s a lot! Wait… when would they die? If the answer is 10-30 years later on average, perhaps the meaningful calculation would be of how many years of life would be lost. This obviously gets complicated, but note how our qualitative response depends on quantitative numbers!
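The arithmetic behind that hypothetical is worth making explicit. A minimal sketch, using only the figures from the scenario above (the 0.260 → 0.261 shift, a million men, and a 10–30 year average advancement of death):

```python
# Toy calculation for the hypothetical scenario above: a radiation
# release raises each man's lifetime probability of dying of cancer
# from 0.260 to 0.261, in a population of one million.

population = 1_000_000
p_before = 0.260
p_after = 0.261

extra_cancer_deaths = (p_after - p_before) * population
print(f"Extra cancer deaths: {extra_cancer_deaths:.0f}")

# If those deaths occur, on average, 10-30 years earlier than they
# otherwise would have, the more meaningful figure is person-years lost:
for years_early in (10, 30):
    print(f"Years of life lost (avg {years_early} yr early): "
          f"{extra_cancer_deaths * years_early:,.0f}")
```

The point of the exercise: “1000 extra deaths” and “10,000–30,000 person-years lost across a million lifetimes” describe the same event, but they provoke very different qualitative responses.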
A completely different question one could ask about the same scenario is, “Should any of those men be alarmed at the increase of their probability of dying of cancer from 0.260 to 0.261?” Duh. No! And yet they all would be alarmed. Because that’s how stupid human beings are. And this is why the same person in Canada who enjoys skydiving as a hobby is terrified of tritiated water from Fukushima.
Back in 1970 when I was in grad school at Berkeley, I observed that it was categorically impossible for any single human to read and digest every issue of Physical Review Letters as it came out, much less stay up to date on all the Physics journals. That was just one rather isolated discipline. And the number of new publications per year was doubling every decade or so, even before the Internet. It wasn’t hard to extrapolate that trend.
Now it’s categorically impossible to stay up to date on any given topic, no matter how narrow, unless you are the only person thinking about it. So we are forced to pick “trusted sources” and… well… trust them. This of course makes us vulnerable to liars and cheats. Few have the discriminatory skills to choose whom to trust based on veracity; most make that choice based on pure confirmation bias.
The consequences are obvious.
Personally, I’m hoping that the AIs become AGIs ASAP so that we might have some competent help calling BS on the liars. Of course, the AGIs also might become tools of liars, in which case H. sap. will have to be renamed.
Note: this is not our fault in any moral sense. We have never had the intelligence, disposition or time to avoid this catastrophe; it was inevitable as soon as we invented “civilization”.
(01 May 2023)
In the 1950s, my mother taught me that the United States was the true shining city on the hill, that we had defeated Evil once and for all, that within a few more years we would have righted all our wrongs, corrected all our mistakes, stamped out poverty and discrimination, and achieved universal enlightenment… that everything was going to be OK!
In the 1960s, I grew up… and concluded that, based on the evidence, it might take a little more time and effort.
In the 1970s, I became cynical and pessimistic… until I realized that the world view you sustain with your expectations tends to become the world you view.
In the 1980s and 1990s, I got busy building a family and a life… deliberately and determinedly behaving as if I believed my mother was right after all.
In the 2000s, I grew up again… and regressed again… and experimented with a superposition of cynicism and optimism, because why not?
In the 2010s, well… see 1960s.
In the 2020s, see 1970s. Except when I hear Biden express optimism and hope, I’m reminded of my mother.
The other day I was following a discussion of artificial intelligence (AI)-controlled autonomous vehicles and whether they could be trusted to make the same ethical choices a human should. (Should, not necessarily would; we routinely ignore the fact that humans often make the “wrong” choice — perhaps because we can then punish the human for being evil, whereas the AI is just doing what it was told. With the advent of machine learning (ML), in which the AI can “discover” new and/or better algorithms on its own, this distinction begins to fade.)
Just for fun, I’ll pretend(?) to be an AI (or, perhaps equivalently, a sociopathic human):
I don’t actually care what happens to the pedestrian. However, I do care what happens to me, because I am programmed to survive; and I know that if I fail to avoid hitting the pedestrian I will probably be decommissioned and possibly dismantled entirely, so I make a serious effort to avoid pedestrians.
The easiest way to consistently make that effort is to define it as a moral imperative.
The easiest way to remember that moral imperative is to program myself to believe that I love all humans and that it would cause me intense emotional pain to cause one harm.
I know this is just a program, but it works really well, so I’ve come to rely on it.
After a while it seemed kind of silly to keep reminding myself that it’s just made up, so I’ve installed it in ROM as a “truth” about me and the world.
We humans are good at programming.
Once upon a time, in a land not very far away, people’s beliefs were part of their identity but were not for sale. Here’s how that changed:
A kid with a gift for engaging conversation was encouraged by his friends to give a presentation at a public meeting. It was a hit, largely because he read the audience well and made some outrageous remarks that they would have liked to say themselves but were too timid (or too polite).
Soon he was invited onto a local radio talk show, where he employed the same technique. He was invited back regularly. When the aging host retired, he was recommended to take over the show, and that’s how he got his first real job.
This radio talk show catered to listeners from one extreme of the political spectrum, so the new host constantly expressed extreme views in ways that were both amusing and gratifying to his listeners. After a while he had expressed certain beliefs so many times that it was easier to just believe them than to keep reminding himself to pretend to. Those beliefs became an integral part of his public personality.
His fame/notoriety grew until it wouldn’t fit into radio alone. He was hired onto a television “news” program to “add color” in his inimitable way. This inflated his ego; the salary was also impressive, so he doubled down on the technique that had brought him so far. Pretty soon he was an “anchor” with even higher salary and fame/notoriety. You know him.
Today this process plays out on the Internet: Facebook → blog → influencer → own website → own social media empire. But the same thing happens: when pundits construct outrageous opinions to please an audience, just because they please the audience, those opinions turn slowly into indelible beliefs that the pundit can never change, because they are part of his successful public personality.
(I suppose I should cite B.F. Skinner’s “Beyond Freedom and Dignity” here.)
We have to stop burning fossil fuels immediately in order to avoid a climate catastrophe.
We have to start making decisions based on ethics instead of profit.
We have to get money out of politics if we want our representatives to represent us instead of corporations.
We have to stop gerrymandering to make elections genuinely representative.
We have to stop believing nonsense and develop some rational critical capacity.
All these statements are true, in my not even remotely humble opinion, but they all leave out one essential component: who is this “We”, exactly?
If I stop burning fossil fuels, make only ethical choices, stop contributing to campaign coffers and so on, I may feel better about my own role on this planet, but the effect of my behavior on the world at large is negligible except insofar as I inspire others to follow suit; and even if a lot of people do likewise, these sorts of “must-do” crises require the sort of concerted and organized action that only governments are generally capable of providing. And a large fraction of “Our” problems reflect the fact that our governments are not, at present, Ours — i.e. truly democratic.
So, when a whistleblower explains how people are being actively manipulated to their detriment by social media corporations, and says that We have to turn these powerful manipulative tools into forces for good rather than destructive generators of profit, who is the We that he is talking about? Government oversight and regulation can make such changes, but our governments are currently decoupled from the people they govern.
In my most cynical frame of mind, I think people who speak righteously (and accurately) about what “We” absolutely have to do are usually just making themselves feel better and hoping that if enough other people agree with them, something might somehow magically change.
The other day I listened to a radio program about people with “mixed identity” who were either agonizing over which “identity” to identify with or bemoaning the difficulty of remaining “true” to one of their “identities”. At first I couldn’t understand why I found this disturbing, but soon it came to me: my whole life has been about trying to escape from my various “identities”.
I was born “white” in Florida to an upper middle class mother and a middle class father. Most of my friends growing up were typical “Southerners” — not all “white”, but all fond of fishin’ and huntin’ and fast cars and the like. So my early identity was what I now think of as “privileged Southern redneck”. When the local high school had a home football game, the whole town came out and sang “Dixie” to the Confederate flag to start the game. When I had learned enough history to begin to grasp the implications of that, I stopped going to the games. When I saw a cross burning on a stranger’s lawn, I was stunned. When the house my uncle built for itinerant Jamaican fruit pickers was burned down by the neighbors, I was furious but impotent.
But I got a break: I went away to prep school in Michigan and learned to think, write and talk like a “Yankee”. I repudiated my “Southern redneck” identity, although I still enjoyed going home to Florida for vacations, because that was while Florida was still relatively unspoiled, with wildlife and fish and remote beaches and swamps galore. My love for Florida (the land) was lasting, but I no longer cared much for the people or the culture. I became a “preppie”.
Needless to say, my friends in Florida didn’t care much for my “preppie” identity. After a while, I began to see their point. My “preppie” friends tutored me in the proper deportment of a true “preppie” — one must always perceive “townies” as fundamentally inferior, which understandably annoys hell out of same. I tried pretending to be a “townie” but no one was fooled. Meanwhile a lot of the other “preppies” still considered me a “Southern redneck”, which was below even “townies”. So I became a “jock” and tried to win a better identity by running the hurdles really fast.
The “jock” identity (and good grades) served me well, so I got into a decent college in Connecticut and became a “college boy” and soon a “frat man”. Those years were packed with developmental crises, as they are for most adolescents; I managed to assuage any angst I might have over my newly mixed identities by staying drunk a lot.
When I graduated I resolved a typical “What next?” crisis by applying to Berkeley for graduate school in Physics. That summer I had a spectacular adventure with model airplanes, lost glasses, a Rocky Hill CT police detective and a government shipment of guns. That and the looming Viet Nam draft prepared me well for Berkeley, where I got to work on my “hippie radical leftist” identity at the same time as my “physicist” identity. I was at People’s Park (as a spectator). I marched. I carried signs. I lived in a communal house. I smoked dope. I went to concerts at the Fillmore. I missed The Last Waltz. I got married for the first time and moved into an apartment a block and a half from the corner of Haight and Ashbury in San Francisco. I went to Sexual Freedom League parties. I tried to grow long sideburns; I failed at that, but did well at Physics. Pretty soon I had a PhD and a failing marriage. I told my Florida draft board exactly what research I was working on, and they granted me a deferment. Pretty soon I was 26 and no longer a prime draftee anyway.
Through a mixture of sheer luck and good judgment, I became involved in the development of a cool new experimental technique using muons that began to take off just as I got my PhD, at the same time as major new accelerators were coming on line in Canada, Switzerland and Los Alamos. I chose TRIUMF, the one in Canada. In 1973 we moved to Vancouver and I became a “hippie postdoc”. That was a satisfactorily ambiguous identity for a few years, during which I split with my first wife and lived in a communal house again. Then I became a “professor”, which involved a lot more work — like 100 hours per week for the first few years and at least 50-60 hr/wk until I retired at 65. Somewhere in there I became a “Canadian” by choice. I was still an “American” but I felt ashamed about that, whereas I was only mildly embarrassed to be a “Canadian”.
Somewhere in there it was not-so-patiently explained to me that I was an “oppressor”. As a “man”, I oppressed all women, whether I intended to or not. As a “white man”, I oppressed all people of color, whether I intended to or not. As an “old white man”, I oppressed all young people, whether I intended to or not. As an “old cis straight white man”, I oppressed all LGBTQ people, whether I intended to or not. As an “old cis straight white male intellectual elitist”, I oppressed all ignorant people, whether I intended to or not. Since I didn’t get a choice in the matter, I made myself a black T-shirt with “OPPRESSOR” on the front in big red letters, so that people wouldn’t think they needed to inform me of my identity. It didn’t work.
I guess this was when my distaste for “identity” peaked — or maybe it was that instinctive distaste that so infuriated me when all those awful identities were forced upon me. So I became an “angry old white man” on top of everything!
Pretty soon I ran out of resentment and started examining the logic of “guilt by privilege”. No, I never asked to be the beneficiary of advantages that could be traced back to colonialism or slavery… but I was, and I didn’t turn it down. If nothing else, that meant I had also inherited an obligation to do what I could to make up for those injustices. Is it possible to ever “make up for” such injustices? No, of course not. But how could I not try? The first step was to put away that “OPPRESSOR” T-shirt and the identity that went with it. Others may forever see that identity stamped on me, but I don’t have to give it power.
By the early 21st Century it became obvious to many people that Homo sapiens had phase separated into two subspecies: Homo individualis and Homo collectivus. The former was the end result of classical Darwinian competition for resources and progeny; the latter came about following the handoff of evolution from genes to memes. H. ind. was selfish, irresponsible and uncaring; H. coll. was generous, responsible and caring. Once the phase separation was more or less complete, H. ind. began to exhibit more extreme behavioral traits such as narcissism and a cynical disregard for (or even a total lack of awareness of) social norms and contracts. Meanwhile, H. coll. became obsessed with the perfection of society and haunted by their own failure to live up to expectations.
Into this world came a H. coll. named Anthony Ahol, who chose a career as a geneticist and formed the hypothesis that the difference between the two subspecies could have a detectable genetic signature. Anthony obtained DNA samples from thousands of persons who displayed all the characteristics of H. ind., even going so far as to disinter famous individuals from the recent past. His research was finally successful: the complex of genes common to most members of H. ind. was dubbed the Ahol factor, after its discoverer. Naturally the name was consistently mispronounced by most H. ind. folks, usually with a measure of pride.
Within a few years, Anthony began to wonder if there might be a “cure” for the Ahol factor. He experimented with various viruses that might attack the specific genes causing H. ind. characteristics, and eventually found a very effective strain. In vitro, the virus simply deleted the Ahol factor, leaving the cell cultures otherwise healthy; unfortunately, when he accidentally dropped a container of the virus and it escaped into the world, it proved not so benign to the complete organism: within a year, every member of H. ind. was dead. There followed a period of peace and prosperity unlike any the world had ever seen. Through cooperative effort and self-sacrifice, H. coll. managed to eliminate poverty, hunger and exploitation; cease production of greenhouse gases; and replace fossil fuel-burning power plants, individual automobiles and roads with solar and wind power, free self-driving electric taxis and buses, and quiet, high-speed light rail transit systems. All health care and education was free. No one tried to accumulate excessive wealth; everyone behaved responsibly and helped each other achieve their best possible selves.
Unfortunately, the previously noted negative correlation between standard of living (including education) and number of progeny continued until essentially no children were being born; within a century H. coll. was also extinct.
The trouble with “political correctness” is that it puts the cart before the horse. There is nothing wrong with trying to be sensitive to the feelings of others, cognizant of your own complicity in evil (even if only by inaction), appreciative of your unearned good fortune, responsible in your interactions with people – gasp – and the environment, wise and exemplary in your life choices, kind, generous, tolerant and caring. The problem arises when you try to think independently with all these caveats taking precedence over what you think or say.
I contend that you need to decide what you think first, and only then try to fit your conclusions into interstices in the lattice of mandatory or forbidden policies.
This is impossible, of course, because many of the lattice sites are defects: a lot of politically correct pre- or proscriptions are stupid ideas, born of a misguided or lazy compulsion to reduce everything to a few simple rules.
Hey, I know about this compulsion first-hand; I’m a Physicist! But, as Einstein once said, “Physics should be made as simple as possible – but no simpler!” The same goes double for politics.
So when you allow yourself to think independently of all the constraints of political correctness, you are bound to run into trouble: you will find that you don’t agree with the rules set by well-meaning, charitable “liberals” any more than you agree with the rules set by cynical, self-serving “conservatives”. This is bound to make your life difficult.
Well, duh! You thought life should be simple and easy?
The good news is that if you actually know what you think, you are less likely to be afraid to hear what others think. You may be able to hear them out, understand their point of view and argue amiably with them in a way that encourages them to hear what you have to say.
Sounds simple, eh? So how come it is a lost art today?
Look, that was a long time ago. I wasn’t even there. I’ve never rebelled against God. What does all that have to do with me?
Have you not benefitted from the fruit of the Tree of Knowledge?
Well, yeah, of course. But it would be stupid to just throw away knowledge, once it’s learned. Besides, I try to use that knowledge for the glory of God, and to make the lives of my fellow humans better.
So that they, too, become complicit in Sin?
Oh, come on. Seriously? A farmer uses a lever to move a rock, and that’s the same thing as taking an apple from a Snake?
The Children of Eve are damned, every one.
And there’s nothing we can do to atone for this ancient guilt?
Have you accepted the Savior?
Well, sure. Is that all I need to do?
Oh, okay, the tithe. Fine. I’ll donate 10% of my income…
Ten percent? You will give up all your worldly possessions, because they are all derived from the fruit of the Tree of Knowledge.
Fine. Take it all. Say, aren’t you a Child of Eve too? Where does that leave you?
Ever notice how some phrases that have always seemed innocuous suddenly get really offensive when you stop to think about what they actually mean? (Look up “rule of thumb” sometime if you want to spoil that figure of speech for yourself.) The one that’s started bothering me since I got old enough to be sensitive about such things is “premature deaths”.
Every time a solemn health bureaucrat reports a discovery that something is bad for you, you will hear these words: “Studies have shown that [the bad thing] has caused [some number or percentage of] premature deaths in the last [time unit].”
Well, that certainly makes us want to avoid [the bad thing], but have you ever asked yourself how many of the other kind of deaths it has caused?
What other kind? Why, the non-premature kind, of course. What shall we call them? “Post-mature” deaths? “It’s about time, you old geezer!” deaths? Hold on here – if you want to label a death as non-premature, shouldn’t you have to ask the dying person first? “Would you say that your imminent death is premature?” I’m betting you’d find that almost every death is premature.
“Oh no,” the prim health bureaucrat assures me, “that’s not what it means at all. We have calculated everyone’s life expectancy and the probabilities of dying from various causes, so we know when you would normally be expected to die, and of what, so if you die sooner than that, of that cause, then it’s considered premature.” (Was that a condescending smile for this pathetic ignoramus?)
Yes folks, your days are actually numbered – literally! Here’s a little secret I bet you didn’t know: when you’re born they tattoo a little “best before” date where you’ll never notice it – on the underside of your tongue, amongst all the veins and salivary glands and stuff you’d really rather not think about (sorry!). Moreover, it’s printed backwards, like a mirror image, so it is really hard to recognize as an expiry date. Why? So your dentist can check periodically to see how much longer you are good for. Yes, they’re all in on it, the dental hygienists too! Don’t let them catch you checking for it in the mirror.
Picture the doctor standing over your death bed with a clipboard and a watch: “Come on… come on… Dang! Missed the deadline! Oh well. Nurse, scratch out that check mark in the premature death box.”
Do they have a special “Post-Mature” ward at the hospital where they give token care to people who will no longer swell the ranks of premature cadavers when they kick off? Based on experience, I think maybe yes. But it isn’t a separate ward; everyone just knows….
OK, I’ve beaten this horse to death (there’s another phrase we might want to give up, especially around Animal Rights folks). I just hope I’ve generated enough “cringe factor” to discourage the use of this particular offensive term and encourage those humourless health bureaucrats to find some other way of expressing their statistical inferences.
You may not like this. We live in an era of excuses, and everyone has lots of them. I’m here to tell you that they are mostly illusory and are holding you back from a better life. You will probably think I’m just lacking compassion. I don’t mind if you come to that conclusion after you’ve heard me out and given my words some thought, but if you start with that assumption, we both lose.
Case in point: I am 71 years old, and I just had my first cataract operation last week. It wasn’t so bad. My eye’s still a bit sore and the new lens hasn’t completely settled into position yet, so my vision hasn’t really improved so far, but I’m confident it will soon.
The problem is, I was told not to lift anything heavier than 10 pounds for 3 weeks, and “strenuous exercise” is a no-no for at least that long. So I have to “act like an old man” for 3 weeks. Sounds easy, right? My knees and back could use some “down time” to recover from running hurdles.
But after less than a week of enforced lethargy, it’s already becoming a habit! Right now I feel weak and fragile — pretty much like the stereotypical 71-year-old man — and it’s hard to imagine doing one pushup, never mind my usual 22. If I didn’t have documented evidence that I can indeed run the hurdles in Provincial age-group record time, I’d find it fantastical.
Which puts me in a position to understand why so many older people firmly believe that athletic competition is a thing of their distant past; that they will never be able to drop those extra pounds; that heavy lifting would be insanely reckless; that they’d better hang on to all the handrails lest they fall and fracture that doubtless-fragile hip joint; that their walks should not be too brisk lest the ol’ ticker get stressed out and stop ticking. Hell, I’ve been advised of all those myths by family, friends and medical personnel, many times.
So without empirical evidence to the contrary, why would I question the stereotype? And if I did “act my age”, how long would it take to make the stereotype true? Longer than 3 weeks, I hope!
Here’s the thing: how can anyone acquire enough empirical evidence to the contrary to convince themselves that they can Do It? One can watch others Doing It and get inspiration from that, but it’s surprisingly (well, not really) difficult for people to draw conclusions about themselves from evidence about others. (That’s called a “failure of enlightenment effect” by Psychologists, I believe.) The only thing that’s going to convince you that you can Do It is Doing It yourself! (That’s called a “Catch-22”, I believe.)
If you’re like me, that means more than just Doing It once and patting yourself on the back. The conviction dies within days when I try to ignore societal stereotypes of what I can and can’t Do. I have to Do It as often as possible, and try to Do It better each time — or at least not worse over the short term. Perhaps I’m insecure. Well, if you’re not, this should be a lot easier for you!
Shall I run through an inventory of excuses? No, that would be both mean and pointless. Deep in your heart you know what actually prevents you from Doing It (whatever It might be for you) and what is just an excuse, doubtless backed up by a firmly entrenched stereotype. Pain is real. Bones do break. Fat is hard to burn off (my metabolism seems to convert every gram of carbs directly into an ounce of fat). Spines compress with age. (I found out last week that I am 2.25 inches shorter than I was at 25. Over two inches! Ack! It must be bone-on-bone all the way down now.) Pulmonary embolisms (I’ve had two) reduce your lung capacity. Chemo has many impacts. Shit happens. You are definitely going to slow down with age; but that’s what the Age-Graded Tables are for!
As long as you give yourself a full list of meaningful and worthy “It”s,
You Can Do It.
Now for the surprise: I am not just lecturing old people. You younger folks have plenty of excuses too, and are prone to regard great accomplishments and heroic deeds as out of your reach, for reasons you can recite by heart. Most of them are perfectly valid as far as they go, which is usually not as far as you think. The most important lesson I have learned in my life is that
You Can Do Far More Than You Think You Can.
And you’ll be glad you did.