My Brilliant Friend and Me
I paid $20 for ChatGPT Plus.
I asked ChatGPT to tell me how many books are in existence and it said there are approximately 130 million unique books in the world. The chatbot cited “Google’s attempt to catalog all known books globally, including both modern publications and historical works.” Cool if true.
Next, I wanted an average of how many books a human reads in a year—on average, how many books a year does a human read?—and it gave me a few scenarios:
Global Average (6 to 12),
United States (12 with a median of 4),
European Countries (6 to 10) and
Avid Reader (50 to 100).
I don’t know why it atomized its answer that way. America is number one but maybe not after all. I didn’t follow up on it. I was fine with the answer but curious that it made no specific mention of the country I was prompting from.
I learned global life expectancy is approximately 73 years and that if we (AI and me) assume people begin reading books in their teens, give or take a couple years, one’s reading life is about 60 years long. With this in mind, the Global Average Reader might get to roughly 522 books while the Avid Reader might read almost ten times that at 4350.
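For the record, the back-of-the-envelope math checks out. A minimal sketch, noting that the books-per-year rates below are back-solved from the totals (my assumption, not figures the chatbot gave me):

```python
# Sanity-checking the reading-life arithmetic above.
# The per-year rates are back-solved assumptions, not the chatbot's figures.
LIFE_EXPECTANCY = 73
READING_STARTS = 13            # "begin reading books in their teens"
reading_years = LIFE_EXPECTANCY - READING_STARTS   # about 60 years

GLOBAL_AVG_RATE = 8.7          # books per year -> 522 over a reading life
AVID_RATE = 72.5               # books per year -> 4,350 over a reading life

print(reading_years)                           # 60
print(round(GLOBAL_AVG_RATE * reading_years))  # 522
print(round(AVID_RATE * reading_years))        # 4350
```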
When I tried to take my search to the next level, which involved comparing the 130 million books in existence to the 522 books read by the Global Average, GPT failed. It had difficulty visualizing anything to do with the 130 million books. A few more tries resulted in reaching my daily ChatGPT limit, unless I paid $20 for ChatGPT Plus.
The data visualization caused a problem for ChatGPT Plus, too.
I asked if 130 million books was too many books and it agreed with me, “you’re right,” it said, “130 million books is a vast figure.” I suggested we use a realistic number of books and it said “let’s break it down to a more realistic number of books.” It displayed stats on public libraries, online libraries, and e-book platforms. About 5 to 10 million accessible books, down from 130 million. We decided that 10 million would work going forward.
I thought that after I paid the $20, maybe ChatGPT would be the one coming up with the ideas, but no.
Here’s a bit of context. For some reason I have it in my head that each and every time I choose a book to read, I’m eliminating a certain number of potentially read books from my reading life. Like Sylvia Plath’s fig tree meets Intro to Algebra. I’ve always imagined this could be illustrated as a graph or a calculation, but I’m one of those people who told himself he couldn’t do math around the age of eleven or twelve and proceeded to not do math excellently through elementary school, high school, and university. Here’s what GPT arrived at:
I told the chatbot I didn’t understand the graph because I didn’t. What I didn’t say was I also didn’t appreciate the infographic style. All those books stacked up. The stylish lad with the tie. Actually, it’s kind of sweet that it came up with that image with no direction. I reiterated my prompt and asked for another graph.
And then another.
I used a few different strategies over the course of the conversation. I tried to be clear; I referenced what we’d talked about previously. I noticed how agreeable the chatbot was being and I got a little annoyed by how often it told me I was right. Wasn’t the chatbot supposed to be right? I’m happy to be wrong.
My efforts to be as clear as possible exhausted me, and my attempts at referencing our very own conversation almost always failed (the chatbot doesn’t seem to have much of a memory) unless I copied and pasted, in quotation marks, exactly what it had said in a previous response in order to steer our conversation, and even then, the graphs wouldn’t translate. I also couldn’t be sure whether what I copied and pasted had invoked its memory or was simply treated as novel input.
Over the course of the morning, I had to breadcrumb my way into any kind of coherent answer, at least what I thought a coherent answer would look like.
I think this is one problem with the chatbot. How do I know it’s right? Perhaps all of its responses were technically correct based on my prompts, but then again, it’s awfully quick to realize its mistakes when told it’s mistaken.
I don’t know if it’s possible to articulate what I was trying to articulate. Maybe the graph or the calculation doesn’t exist; maybe it’s less mathematical and more theoretical and every book chosen alters our future reading life and in that way shuts off a hypothetical number of books from ever being read; maybe the graph does exist and there’s just nothing romantic about taking 60 years to read 522 books of the 10,000,000 available. Choose one and you’ve got 9,999,999 left.
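If you want the unromantic version in numbers, here is a crude sketch, using only the figures from my conversation and nothing authoritative:

```python
# The pool and lifetime total arrived at in the conversation above.
TOTAL_BOOKS = 10_000_000    # the "realistic" number we settled on
LIFETIME_READ = 522         # Global Average Reader over a ~60-year reading life

never_read = TOTAL_BOOKS - LIFETIME_READ   # books shut off forever
share_read = LIFETIME_READ / TOTAL_BOOKS   # fraction of the pool ever read

print(never_read)              # 9999478
print(f"{share_read:.5%}")     # 0.00522%
```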
I got the feeling I wouldn’t be able to get ChatGPT Plus to manipulate the data and illustrate how one book read is really more than that.
I didn’t gain any clarity until I told my wife what I was doing. I described what I’d been thinking about in far fewer words than I’d typed to ChatGPT Plus and only had to say “does that make sense?” once or twice. Talking with her made me realize that maybe I could work backwards. There I was, talking to a human being for free and making progress on how to think through my problem.
Eventually, AI and I got to cumulative effects and exponential elimination, and some visual representations made by ChatGPT Plus seemed on track, but there were just too many errors, ranging from nonsensical lines to typos, and I gave up. I had hoped AI would help me turn a half-baked idea into something more but the whole exercise felt cooked.
I think I got what I asked for but who’s to say?
The Exponential Cumulative Effect Of Eliminated Books Over A Lifetime:
The graph looks almost identical to the first graph ChatGPT shared. And, aside from the few minor, hallucinatory details, the first graph didn’t cost me $20.
I imagine any AI enthusiast worth their silicon could probably read my account and zero in on the error of my ways, of which I’m sure there are many, or argue (probably correctly) that I got what I was looking for all along, but I’m not sure.
My exercise with AI felt fraught, ineffectual, like a fool’s errand, and I still don’t feel any closer to my idea; in fact, I feel further from it. I don’t want to think about it at all. And if AI can’t help a fool, then who exactly is it for?

I might have asked ChatGPT how many books are deducted from my lifetime potential for every week I spend pretending to read Lonesome Dove. Just for additional data.
Also pretty cool you get to talk to your wife for free.