go upright and vital and speak the rude truth in all ways

OG rude truth

Saturday, March 14, 2026

When will Christian Transhumanism get political?

When the Christian Transhumanism Association first became organized, I was invited to lurk on the fringes and even, for a brief period and very nominally, to serve as a theological presence in a kind of consultative capacity. From the beginning, I was ambivalent about this, because I have always been deeply unimpressed with transhumanism as a cultural and pseudo-philosophical movement, and nothing that has transpired since I first began researching transhumanism in earnest in about 2006 has shifted this impression. 

In 2012, I wrote that "democratic transhumanism," aka "technoprogressivism," seemed to me to be the least objectionable formulation of these ideas--at least, this group seemed to be attempting to hold on to democratic norms of distributive justice and the like. But even then, this generosity was less a matter of affirmation than an attempt at hopeful construction: an invitation, to the H+ corner that might be most receptive to it, to be serious and less insular and join other kinds of thinkers in a joint project to imagine a better world for everyone and to think through the impossibilities and injustices insisted upon by their more libertarian (and increasingly over time, frankly authoritarian) counterparts. These folks were the most willing to dialogue with religious thinkers and theologians, and it seemed like we might build enough common ground to be useful interlocutors for each other. And this seemed like the branch of transhumanism that the emerging CTA was most in dialogue with. 

So, when the CTA first organized, I was there at its inaugural conference, hosted at Lipscomb University in Nashville, TN, one of the flagship educational institutions of the Churches of Christ. I even chaired a panel as a favor, since I was there anyway for the Christian Scholars' Conference that same week--the weirdest and wildest moderating I've ever been tasked with.

Since then, I've been a name on the roster in the Facebook group although the number of times I've participated in a comment thread can't be more than half a dozen. I've been periodically tempted to leave over the years, but every time I've thought, well, this might be useful for research purposes?, and I've ended up deciding to continue to lurk on the fringe. In the past couple of years, I have seen an increase in posts there of the most wide-eyed, credulous sort: posts about ChatGPT theologizing (whuuuuuutttttttf) and all sorts of I-just-read-this-unvetted-hype-can-you-believe-what's-just-around-the-corner posts. There's very little actual theological work happening, and when there is, it seems to be sui generis blog posts, as if no previous serious theological work on transhumanism/posthumanism exists or can be found. (And there is plenty! Including, of course, mine. I mean, not to be petty, but as this is my blog I might as well acknowledge how very special that particular feeling of erasure is.)

All of this to say: I have always been dubious about the very idea of Christian Transhumanism, and ambivalent at best about the CTA as an organization.

But in the last year or so, I have become increasingly alarmed, not at the routine silliness of the posts and discussions in this space, although these continue to frustrate and baffle me, but at the absolute refusal of the CTA to say anything at all--anything at all--about the implications of the emergence of tech and transhumanism as political players on the side of the undeniably authoritarian Christotechnofascist Trump government. (Well, "government.") Not a single post. Nothing.

Nothing about Elon Musk's interference in the election with his giant pots of money and crazy stage stunts; nothing about his political downfall, meltdowns, racist chatbot, unethical business practices, and repeated lies; nothing about Peter Thiel's influence over various members of the Trump entourage, including his orchestration of Vance's career (well, "career") and putative religious conversion, or Thiel's very weird "theology" lectures; nothing about the gleeful, reckless stuffing of "AI" into various government agencies as the replacement for all the people unlawfully fired by Musk and the DOGE-bros; nothing about the role of Palantir and surveillance tech; nothing about the gutting of scientific research, the insertion of literally anti-science kooks into leadership, or the coercion of universities on false pretenses.

Nothing, nothing, nothing, nothing, nothing about the H+ and H+ adjacent Tech Bros and their very public shift of political allegiances to far-right authoritarian extremism, for the simple reason that it works out just great for them.

So, guys. Look, this is a moment that calls for real theology. The kind of theology that produced principled dissent in the past, and resulted in documents like the Barmen Declaration. It calls for self-examination, and repentance. It calls for taking a good hard look at what has actually become your driving motivation, and therefore the determiner of who you're willing to ally with, what projects you're taking on, and how, and why. And if what really matters to you is some batshit H+ fantasy of life extension and neato gadgets--and that to get at that, you're willing to sit tight and keep silent while thousands of people are currently dying and thousands more will, because you traded your trust in actual science for self-serving H+ bullshit--well, then, Christian Transhumanism was just an early adopter of the whole Christotechnofascism thing all along, I guess, and I really misplaced my early ambivalent generosity.

So, I'm waiting. What will it take? One more war? One more murdered protestor? One more story of Palantir- or Anthropic- or OpenAI-aided government surveillance and kidnapping and concentration camp deaths? Will it take the installation of a ChatGPT Trump as Forever President after Our Dear Leader finally shuffles off his mortal coil to ring the alarm bell that these guys are not the Good Guys? Is there anything that would prompt, from the CTA, a public, theologically and ethically grounded repudiation of these technologies that are obviously, explicitly, intentionally causing harm in order to consolidate power and profit?

...?


the only way to follow Jesus is post-Christian

Look, I know there's a brand of spiritual-but-not-religious anti-institutional naive youthful rebelliousness out there, the kind that commits to an unexamined universalizing implicit assumption that "Religion" is to blame for ruining pure spirituality and therefore the only thing to do is to be bravely, individually, uniquely and eclectically spiritual. Build your own system of spiritual beliefs that work for you from the bits of your World Religions class that you happened to pay attention to, yadayada. Crystals, a bit of Stoicism, a favorite Hindu deity, meditation, Jesus is Just All Right, I don't pray but I believe in manifesting. 

That's not what I mean. I mean something like the Tillichian death of religious symbols is happening with the word "Christian." And this isn't a slow death of meaning from lack of use--the kind of thing (if I remember right) Tillich was originally positing. If anything, it's a death of meaning from semantic abuse and overuse. I think the label "Christian" is dead because it's been actively crucified by people who would, had they been there, happily have crucified Jesus--that upstart subversive bleeding heart do-gooder with as much interest in upholding the reigning power structure as he had in directly politically disrupting it--and what these folks mean when they embrace the word "Christian" is a shambling zombie corpse of the religion they are daily crucifying.

So I don't think we need to take on the task of rehabilitating this word. Let it die. What do we need it for? Jesus wasn't a Christian. If what you're after is following that guy, what's this word got to do with it?

I am aware that this is complicated, that there are institutions affiliated with the word "Christian" in various ways that are engaging in active resistance to the criminal injustices of the Christotechnofascists who are currently murdering and kidnapping people in the streets, who are illegally perpetrating wars around the globe, who are plotting in plain sight to remain in power in ways that are "democratic" in the same sense that these guys are "Christian." And we need those institutions to resist, and to resist as institutions, not just collections of individuals. 

Even so: the project is not rehabbing the label. Let "Christianity" become a term of contempt. Let it be radioactively unusable for the next century, for perpetuity. Let's stop worrying about whether or not we can be "Christians" or whether we have to be content to continually post-pend it with "-but-not-that-kind-of-Christian." Let's accept the death of this symbol and move on because we've got better things to do than defend this word. Let the fascists have it. Let's bury them with it. 

Friday, February 27, 2026

our best defense against AI bullshit is engaged pedagogy

For years now, every semester, regardless of which class I'm teaching, I spend the first couple of class periods asking the students to collaborate together on syllabus policy for attendance, late work, and overall assignment weighting for final grade calculation. I've done this with intro-level Philosophy, First Year Seminar, gen ed courses, upper-level courses--doesn't matter. I love it. The immediate benefits are: 

  1. students actually know what the policies are and don't pretend ignorance throughout the semester as their first excuse and get-out-of-jail-free card;
  2. students generally don't ask me to just ignore or break our policy when they run afoul of it;
  3. the policies themselves are better: clear, comprehensive, student-friendly, incentive-based rather than punitive, and oriented toward achieving the real objective;
  4. the meta-understanding produced by forcing them to identify the real objective (learning together) and work to articulate practical policy applications to achieve it (once they understand how hard it is to make these decisions, they respect the whole idea of a syllabus a whole lot more); 
  5. this is an intentional invitation into scholarly community and a gesture of respect for their agency;
  6. they have to talk to each other, learn names, and work together on a common project for mutual benefit literally from Day 1.

I run this by putting students into small groups and giving them about 15 minutes to craft a policy proposal. It works best if you ask them to start with the question, "what is this for/what are we trying to achieve?" For example, attendance: what do we really want? We want everyone to come to class. What kind of policy will achieve this? Students are often stuck in the default "three excused absences and then x% of your final grade" mentality. First round results yield a lot of unimaginative variations on this kind of scheme. It is a great pleasure to listen to these and then say, "Nope. Not doing it. Not doing anything like that. Here's why: this doesn't achieve the objective. Are these policies going to motivate you to come to class? Hell no. We can do better. Back to the drawing board, get creative." Over the years, students have come up with all kinds of effective incentives: stickers, favorite junk food prizes, bonus points on the final grade for "streaks" of perfect attendance, possible exemption from the final exam for perfect attendance, bonus points for days of collective 100% attendance. These work because they are identifying actual motivators and along the way, getting the message that attendance is a prerequisite for real learning.

When I first started this, I really thought that the increase in attendance and reduction in student complaints/weaseling attempts were the main benefits--and honestly, that would be enough to keep me doing it. But the longer I've done this, the more important the last two items in my list above have begun to seem.

This semester, I decided to add another day for this process--this means we spent Week 1 (Thursday) and all of Week 2 (Tuesday and Thursday) on syllabus negotiation--and, because it's Women in Philosophy, I combined this with an assigned reading for Week 2, the Intro and Chapter 1 of bell hooks' Teaching to Transgress. Their regular Reading Response and in-class Written Reflection for this reading asked them to connect the dots between our first two weeks and the engaged pedagogy hooks describes.

The rest of the course is designed around providing maximum engagement and agency for the students. We begin Tuesdays with 15 minutes of small group discussion on the reading, which I do not monitor or participate in and is graded solely through periodic self- and peer-evaluation throughout the semester. They decided on the weighting for this component of their final grade. Their Term Project is wide open: who do you want to learn about? what sort of end result do you want to produce and share with the rest of us? do you want to team up or go it alone? We'll have a week of shared learning about various figures not already featured in our syllabus, that includes a participatory game show, two short creative film projects, a video essay, a professional Business presentation, a mood board, a scrapbook, ...you get the idea. 

For the first time in about 3 years, I have zero anxiety about students cutting corners with AI.

These students didn't just sign up for the class, they have signed on. They're determined to get that 100% attendance because the bonus that motivates them is exemption from the final exam. They're excited about the Term Project because they designed it, and they want to do it, which means they won't be trying to cut corners with a bullshit machine that undermines their learning process. They do the reading because they're curious, and because they're accountable to each other for showing up prepared for discussion. 

We can try to "AI-proof" our syllabus and assignments with in-class writing and Blue Books, and while that's necessary, I think, it fails to address the really pernicious problem that the advent of the Bullshit Machine has exacerbated. The really pernicious problem is the default adversarial starting point of the classroom. We already existed in a not-great transactional default setting--students think they're in college to get a degree in order to get a job in order to maintain an existential illusion of socioeconomic security. Faculty put up with it because there's no other way to do the bits of our jobs that we do enjoy (research, teaching the handful of students who do actually want to learn). But the Bullshit Machine moves us from default transactional to default adversarial. Students are looking for ways to get away with it, all right, and faculty are on high alert to prevent them. We're on opposing teams from the start, and the mutual suspicion forecloses any possibility of real learning happening. Because learning is collaborative.

So we can't solve this through syllabus policy, or assignment design. We have to solve the real problem, the perception that the classroom is a bullshit contest to see who can get away with what and who can catch who's trying to get away with it. 

We have to learn how to trust our students to do the work, and students have to learn to trust that their professor is asking them to do worthwhile work, and learn to trust themselves that they can in fact do it. And the only way to do that is to make it as plain as possible that the work is the point, and that there's only one team in the classroom and we're all on it together.

That's what the syllabus policy negotiation does better than anything else, and making the theoretical underpinning of it explicit this semester cemented this for Women in Philosophy in a way I've not seen before. Would it work as well in a gen-ed class? Maybe not; but it can't hurt, and I think every classroom ought to be sending the message that we are taking the work seriously, taking students seriously, and intend to accomplish something worthwhile in our time together. 


Thursday, January 22, 2026

pedagogy in the unfortunate age of so-called AI

Maybe it's impossible to truly "AI proof" a course, but that's not gonna stop a girl from trying.

I don't like quizzes as accountability mechanisms, so in upper-level courses I use reading responses: questions, quote, commentary is my format. Questions related to the reading, a quote with a page number cited, and commentary capturing the initial thoughts in response to the reading. I told them I wanted it to pull back the veil on their brain-workings, and that's it. No polish. That their audience is not me, but their own selves, and ffs not to try to be impressive. I want this to be RAW. UGLY. BRUTAL. USEFUL. REAL.

So Day 1 of my upper-level elective Women in Philosophy course I showed the students an example of What Dr. Thweatt Does Not Want in a Reading Response.

What Dr. Thweatt Does Not Want in a Reading Response is a ChatGPT-generated document following my exact instructions for Reading Responses. ChatGPT flawlessly followed the structure of the assignment, provided multiple bullshit fake-performative non-questions, a quote with a fake citation, and a bullshit commentary written in the first person. (As a bonus, at the end of generating this for me, it offered to write a full essay. That's SO GROSS.)

Anyhow, when I put this on the screen as an example of What Dr. Thweatt Does Not Want, I asked my students what was wrong with it.

First comment: "it seems like these questions...aren't real questions. Like, they sound like they're based on the text but it's not like they are questions someone would actually be asking, if they were trying to understand the reading"

Second comment: "This feels performative"

Good. GOOD. Yes, yes, my chickens, you are perceiving correctly. 

So then I said, that's right. These aren't real questions. When you try to answer them, you can tell that there's not an actual question there, because there's nothing you can actually say in reply. It's just words strung together in grammatically correct ways with a question mark at the end. Then I said, the other thing, of course, that is wrong with this, is that this is ChatGPT.

And 5 students immediately said in unison, "I knew it!" And I said, that's right. You did know it, because this stuff stinks. It stinks because it's bullshit, and we're all pretty good bullshit detectors. YOU KNOW IT. AND I KNOW IT. AND IT'S NOT HARD TO SEE IT. SO DON'T DO THIS.

Just don't do it. It's not what I'm asking for. ChatGPT can't do what I'm asking for. And I can spot the difference. So let's just establish this as baseline on Day 1, okay?

We'll see if this helps? But it's also true that these students are taking this as an elective--no one has to be in here--and it's mostly upper-level students who are pretty highly motivated. So this skews the results, probably. But I'm (temporarily) heartened by the attitudes observed here--serious students, the ones who are actually interested in learning new skills and material and critical thinking, grasp that the bullshit-machine shortcut leads nowhere, and they are not impressed. 

So: some of the kids, at least, are all right.


Saturday, January 10, 2026

Obedience, the primary civic virtue under fascism.

When we walk with the Right
In the light of their might,
What a Glorious Gain they repay! 
While We do what They Will,
They Protect Us all still,
All of Us who will Trust and Obey.

Trust and Obey! For there's no other way
to live in this A-Mur'-ca, but to Trust and Obey.

Not a Burden We bear!
Not a Sorrow We share!
Those with Problems, must quickly away!
Not a Grief or a Loss,
Not a Frown or a Cross,
Not for Us! We will Trust and Obey!

Trust and Obey! For there's no other way
to live in this A-Mur'-ca, but to Trust and Obey.

But We never can Prove
Those We Blame should Remove--
Yet We cannot afford to Delay;
For the favor He shows
For the Blessings bestowed,
Are for Us, Who will Trust and Obey.

Trust and Obey! For there's no other way
to live in this A-Mur'-ca, but to Trust and Obey.

And Enthralled, ever sweet,
We will sit at His feet,
Or We'll Walk by His side All the Way;
What He says We will do,
Where He sends We will go;
Never question! Only Trust and Obey!

Trust and Obey! For there's no other way
to live in this A-Mur'-ca, but to Trust and Obey.


Original "Trust and Obey" lyrics by John H. Sammis. The above parody is mine. 

Tuesday, December 9, 2025

default stance: believe women

That's it. That's the whole post.

Just start with, believing women. Go from there. 

Thursday, November 6, 2025

do you think other people are real

When Techbro asks Papa-Techbro this question in Mountainhead, it's played for laughs, as if we should receive this line as absurd caricature, like no one, not even Elon Musk, is really that delusional, I mean, can't be, right?

I think we ought to start asking people this question, regularly, in the increasingly unmoored conversations around "AI" carried on by befuddled people who don't understand what it is they're talking about but nonetheless want to opine about "inevitable futures" and "interesting possibilities." 

This is the way to cut through the bullshit and get right to the problem, when talking to someone who wants to claim that ChatGPT is a perfectly decent way to cope with loneliness, or a great solution to a shortage of available therapists, or that we can't tell people it's impossible to fall in love with the chatbot they named Erika and trained to sext.

Really? In love with Erika the sext-bot?

So there's not any difference at all between falling in love with a genuinely-different-from-you other, whose reciprocation of your desire for relationship and mutual understanding is wholly voluntary, and the funhouse-techno-mirror of your own masturbatory ego-needs? 

So do you think other people are real, at all, or...?