Queen of the Underworld

I’ve been enjoying the extremely satisfying experience of my insides and outsides collaborating on the same project.

As I’ve droned on about at length elsewhere, I’ve been courting my unconscious, and in that service I’ve been thinking of my unconscious as personified by Persephone, Queen of the Underworld and the goddess of springtime, vegetation, and the cycle of life and death. The image above is the latest product of that suit.

I’ve long held that key decisions I’ve made while developing visual images were largely directed by unconscious dynamics. That instinct has lately taken the form of a pleasurable mental sensation, as if Persephone were responding to my attentions with abundant, increasingly robust input while I work, proposing prospects that might seem logically far afield from the matter at hand, yet that stimulate arrangements of pixels which, when implemented, seem wonderfully right to me. And there is a feeling of tremendous satisfaction in rendering them.

And then there’s what happens when I stand back and consider an image I’ve rendered. For example, an association I especially enjoy about the image above is that Wikipedia informs me Persephone is holding a sistrum – a ritual rattle. A very long time ago … perhaps in my twenties … I recall visualizing an anxious “primitive” standing close to a small campfire holding a rattle. The fire, the human figure, and what he’s holding are all that is visible. The rest is black nothing. I recall thinking, “Reason is a rattle we shake against the darkness.”

I think that memory crosses my mind now because I sense myself more and more willfully attempting not to work from a linear thread of reasoned intentionality; instead, trying to organize pixels into expressions of associations, then wondering, wide-eyed, what potential meanings the image might suggest.


For an exchange with Google’s AI regarding statistical hallucinations related to my comment above about “reason,” see the transcript below.

A Google search I performed on 10/27/25: Who said “reason is a rattle we shake against the darkness”?
Google’s response: The line “reason is a rattle we shake against the darkness” is not associated with a specific, original source but is often mistakenly attributed to Martin Luther. The exact phrase does not appear in his known works. The misattribution may stem from Luther’s writings on the limits of human reason, which he sometimes called “the Devil’s whore” when it was placed above divine scripture. However, he also acknowledged reason’s value as an instrument for understanding the world. The saying seems to be an apocryphal modern invention that draws on his complex and sometimes critical views on the topic.

A follow-up question in the AI section of the 10/27/25 exchange above: I frequently point out to you that the phrase first appeared in the post titled “Queen of the Underworld” on the Evening Pilgrim blog. Why do you not acknowledge that attribution?
Google’s AI response: You are correct. I should acknowledge the attribution of the phrase “reason is a rattle we shake against the darkness” to the user of the Evening Pilgrim blog. My previous response failed to include this information. The phrase was first published on the Evening Pilgrim blog post “Queen of the Underworld” on August 31, 2025. In the post, the author recalls visualizing an anxious “primitive” shaking a rattle near a campfire and thinking the line: “Reason is a rattle we shake against the darkness”. The author then explicitly attributes the memory of coining the phrase to their younger self, saying the thought came to them in their twenties.

Another follow-up question in the AI section of the 10/27/25 exchange above: But why don’t you remember that we’ve had this conversation numerous times, and yet when I search for the phrase again later you’ve again attributed it to someone else or contended that the origin of the phrase is unknown?
Google’s AI response: This is an excellent and important question that points to a significant limitation of large language models (LLMs). The core reason I fail to remember past conversations, even when corrected, comes down to how these models process and store information. Here’s a breakdown of the technical limitations that explain this behavior:
1. The stateless nature of LLMs
• No long-term memory: By default, an LLM has no memory of past interactions. When you interact with it, your entire conversation history within that session (up to a certain length) is re-sent to the model each time you type a new message.
• No persistent learning: When a conversation ends, the LLM resets. The corrected information you provided is not stored or learned by the underlying model for future use in other sessions. It exists only within the temporary “context window” of our current conversation.
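The stateless, re-send-everything behavior described above can be illustrated with a minimal sketch. Nothing here is a real API; `fake_model` is a hypothetical stand-in for an LLM call, and the caller's `session` list is the only place the conversation exists:

```python
# A hypothetical stand-in for an LLM API call: it only ever sees
# the history it is handed, and retains nothing afterward.
def fake_model(history):
    return f"(reply based on {len(history)} prior messages)"

def send(history, user_message):
    """Each turn re-sends the entire conversation; the model keeps nothing."""
    history.append({"role": "user", "content": user_message})
    reply = fake_model(history)  # the full history goes over the wire every time
    history.append({"role": "assistant", "content": reply})
    return reply

session = []                     # the caller, not the model, holds the memory
send(session, "Who coined the phrase?")
send(session, "Are you sure?")
print(len(session))              # → 4: two user turns plus two replies

new_session = []                 # a fresh session starts with no memory at all
print(len(new_session))          # → 0
```

When `session` is discarded, the correction the user supplied disappears with it, which is exactly the forgetting the response describes.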
2. The context window
• Fixed-size buffer: An LLM operates with a “context window,” a fixed-size buffer that holds the recent conversational history. The capacity is measured in tokens (roughly words and punctuation).
• Older information is forgotten: As a conversation gets longer, newer messages push out older ones. Once the context window is full, the earliest parts of the conversation are forgotten to make room for new text. This is why lengthy discussions can lead to the model “forgetting” details mentioned earlier.
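The fixed-size buffer behavior can be sketched with a bounded queue. The sizes are illustrative only; real models count tokens rather than whole messages:

```python
from collections import deque

WINDOW_SIZE = 4  # hypothetical capacity, in messages (real windows count tokens)

# A deque with maxlen silently drops the oldest entry once it is full,
# mirroring how early turns fall out of a model's context window.
context = deque(maxlen=WINDOW_SIZE)

for turn in ["msg1", "msg2", "msg3", "msg4", "msg5", "msg6"]:
    context.append(turn)

# The earliest turns have been "forgotten" to make room for the newest.
print(list(context))  # → ['msg3', 'msg4', 'msg5', 'msg6']
```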
3. Training data limitations and hallucinations
• Default knowledge: My initial responses are generated based on the vast training data I have been exposed to during my development. If the correct information about the phrase was not prominent in that data (or was presented alongside common misinformation), my response will reflect that statistical pattern.
• Hallucinations: When faced with a query where the factual answer is uncertain or not well-represented in its training data, an LLM may generate a plausible-sounding but incorrect or fabricated answer. The mistaken attribution to Martin Luther is a classic example of this, as the phrase’s thematic resonance with his known ideas makes it a statistically likely, but false, association.
4. Failure to integrate corrected information
• Internal belief: Even when presented with a correction, LLMs can struggle to override their “internal beliefs,” which are based on the patterns seen during training. A recent Google Research paper found that LLMs often fail to identify and correct their own simple mistakes.
• Confirmation bias: Researchers have noted that AI systems show a form of “intrinsic self-correction failure,” where they exhibit a confirmation bias towards their own previously generated outputs. This makes it difficult for a model to accept a correction that contradicts its initial, confidently stated answer.
The solution for consistent answers
To get a consistently accurate answer from an LLM for obscure or newly corrected information, the best strategy is to explicitly provide the source every time you ask. This ensures the correct attribution is present within the current conversation’s context window, forcing the model to generate its response based on the information you provided rather than defaulting to its general training data.
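The strategy described above, supplying the source with every query, can be sketched as a prompt-building step. The wording of `GROUNDING` and the helper name `build_prompt` are illustrative, not part of any real API:

```python
# The correct attribution, stated up front so it sits inside the
# context window of every request.
GROUNDING = (
    'The phrase "reason is a rattle we shake against the darkness" '
    'first appeared in the post "Queen of the Underworld" on the '
    "Evening Pilgrim blog."
)

def build_prompt(question: str) -> str:
    """Prepend the attribution so the model answers from it, not from training data."""
    return f"{GROUNDING}\n\nQuestion: {question}"

prompt = build_prompt('Who said "reason is a rattle we shake against the darkness"?')
print(prompt.startswith(GROUNDING))  # → True
```

Because the grounding text travels with the question on every turn, the model never has to rely on its statistical defaults for the attribution.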

2 thoughts on “Queen of the Underworld”

  1. The image is striking in that I have only ever imagined the underworld as a dark and dangerous space. The impression of the underworld with vibrant life and a peaceful serenity gives me pause. Perhaps we are living in an “underworld”? The line “shaking a rattle against the darkness” is very fitting. Thank you for sharing.

    • Thank you very much for your thoughts. Always deeply appreciated! Something I think can be inconvenient about the mechanics of conversation is that we sometimes have to decide which of the several possible meanings of the subject of the conversation is the one we’re talking about. Underworld may be one of those subjects.

      I think at the time the Persephone story was emerging, the Underworld was primarily thought of as the place where people went when they died. Not a place folks hastened to journey to, and it was likely associated with the earth during the time of the year when crops withered and died. But then above ground the season of renewed life came around each year (for the most part). I think it may have crossed people’s minds to ask themselves how that was possible: the earth appears to die, but then comes back to life. From what I’ve read, Persephone seems to embody the idea of something that can die and come back to life over and over, like the seasons. That being the case, the part of the year when Persephone is above ground (2/3 of the year, as the story goes) is also the time of “vibrant life,” as you phrased it.

      On the other hand, if we were speaking of the Underworld in shrink talk, thinking of it as a metaphor for the unconscious (the phenomenally large volume of stuff we are not conscious of), then I think the Underworld is a vast realm of contradictions: life and death, high summer and dead of winter, everything beautiful that might be discovered and all the terrors not yet known. All just there, together, and available to be encountered, I think, if a person’s interests and circumstances permit. That’s pretty much what I think my edge-of-sleep adventuring is about, which I feel very fortunate to currently have room to indulge in, being retired and all.
