Tyler Cowen and Johnathan Bi on AI & Humanity
Source: https://www.youtube.com/watch?v=t6Je8EKhUyw
Tyler is one of my favourite thinkers (I was really impressed with Johnathan Bi as well and have subscribed to his Substack). Below is what I have taken from their interview. It spans a wide range of topics; the ones that are personally interesting to me are highlighted below
Change (regardless of the type) will be hard and it's coming
I think psychosocially it's very disturbing that you can no longer tell people what kind of world they should prepare their kids or grandkids for [...] just digesting that fact, and getting over it...we're not ready for it, for real change.
He then goes on to talk about how even the most positive scenario of the AI revolution may be the hardest for us, because if there are challenges and issues, then resolving those gives us a sense of purpose and meaning. A perfect scenario may be the most mentally challenging for us.
Tyler emphasizes that we are not ready for this change, in part because of the Great Stagnation from the mid-1970s to the 2020s: we will find transformative change harder because we have lived through such relative stability during that time.
One antidote Tyler provides is to invest in good friends, peers, mentors and cheap hobbies. This is something I neglect (specifically peers and mentors), and I need to reprioritize.
"Hot takes" are copes
There are a lot of interesting examples of copes they talk about, from both ends of the AI-capability and AI-doomerism spectrums. The takeaway message for me is that because nobody can predict what is going to happen, all hot takes are copes in some respect, regardless of their position on AI. This is the first time I have heard it put that AI doomerism is a cope: saying "AI will kill us all and there's nothing we can do about it" actually saves you from having to seriously engage with AI with nuance.
Additionally, I found it refreshing that they called out a "perverse psychology" where the builders of AI claim that in two years there will be nothing left to do but meditate. This grandiosity of "exaggerating the risk to make themselves feel special" isn't necessarily a cope, but is related in that it transforms potential guilt into a form of status or importance. You could argue that this feeling of status and importance has the same roots as the more common cope that AI can never replicate the superior magic of human intellect.
It's almost as if they are hedging their bets: regardless of how AI turns out, they have positioned themselves somewhere in the story of AI's history, preserving their legacy (no press is bad press).
Confession of sin is to claim credit for the wrong
He posits that copes are as much about our lack of desire to learn new things as they are about our inability to deal with radical change. I suspect "learning a new thing" spans everything from picking up a new skill all the way to learning how to find meaning in life when everything you've studied and worked for is no longer of high value to society.
Seeing the world as it once was, before it's lost
AI raises the value of travel to other places, to very different places... places that are less saturated by AI.
Of course it's a luxury (not just of money but of time, thinking of other parents of young children) to be able to do this, so Tyler does note that there are "nearby versions" of this: something, anything, to get away from AI and ground yourself before these things become more pervasive.
He talks about how visiting places like rural Mexico or Laos will gain in popularity as people look to escape the "modern world", and I agree: this is what I would do if money were no obstacle. It also raises a fear about whether those communities can preserve their way of life in the face of over-tourism, but that's a topic for another time.
Staying healthy out of curiosity for the future
I found this take refreshing, since it's an antidote in some respects to my default doomerism: Tyler has made the very conscious decision to stay healthy so that, out of curiosity, he may see the outcome of this transformation.
Writing Uniquely for Humans and AI
This was the most impactful section for me personally. In essence, Tyler has changed my perspective: creating a brand online is essential. This is something I personally hate (I haven't got to the bottom of why I hate it so much), but I need to write for humans (in a way that shows authenticity and personality, things that AI can't replicate) and for AIs (so that I may shape their outlook in some small manner).
Writing for AIs in a public manner is actually beneficial, as it preserves your legacy. This feeds into our innate obsession with legacy: we can't fathom being lost to history, even though it happens to everyone (even Henry VIII is just a name in history who did things; there is no understanding of, or relationship to, him as a person). However, despite recognising that striving for a legacy is superficial and hollow, I will try it anyway.
As for writing for humans, there are no surprises here. You need to inject personality, be more subjective, almost "like a memoir".
Tyler comments that blogs are "dead" and mentions how AI can be a better writer than most - this may be true, but I have no other medium at this point, so we are starting here. He mentions that in-person and video will retain value the longest.
Additional Material
Some related quotes from Tyler's blog mentioned in this video that really resonate with me:
Hardly anyone you know, including yourself, is prepared to live in actual “moving” history. It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad. In my view the good will considerably outweigh the bad [...], but I do understand that the absolute quantity of the bad disruptions will be high.
No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.
The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring [...] no one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes [...] not you, not Eliezer, not Sam Altman, and not your next door neighbor.