REQUEST: Please do not react or comment on this post. Thanks!

WARNING: This content was largely generated as a stream-of-consciousness thought process barfed onto a screen through my fingertips. Proceed at your own risk.

I’m not sure what I want to write about. Now that seems false.

This world bores me. Now that seems true. Well, somewhat true and somewhat false. It’s true in the sense that thinking about it more without new information feels repetitive at this point. It’s not true in that focusing on my lived experiences moment-to-moment – especially novel experiences – is still quite enjoyable (most of the time).

What do I want to write about?

Well, I care most about humanity continuing to thrive and continuing to improve the degree to which it, as a species, thrives. And I think we’re very much in danger of going the wrong way there with respect to Earth’s climate. I think we’re in some danger of going the wrong way there with respect to technology, especially artificial intelligence and, to a lesser degree, bio-engineering of ourselves. In both cases, I think the risk at an individual level is correlated with the amount of money that any individual happens to have (this is slowly becoming more true for the climate threat too), especially on the biological-engineering front. Regardless, I fear for the species as a whole because I sense these two technologies as dangers to our species’ desires, in the sense that our species’ desires account for the desires of most individuals, like the desire to be relatively competitive with the rest of the individuals that they know about, or at least that they interact with (also the basics: not dying, having purpose, self-actualizing, and whatnot). These lines of thought largely jibe with, and to some degree riff off of, Harari’s thoughts in Homo Deus. Maybe just read that if you’re interested in my non-fiction viewpoints most relevant to the 21st century. I should probably read that new book he published. Anyway, I should maybe clarify that I’m not consumed by these fears, nor do I sense that they are the only ones. They are the three that seem most pressing during my lifetime, assuming I don’t live beyond age one hundred or so.

I also care deeply about learning more accurately what humanity’s desires are (not going extinct being just one of the most obvious). It would be superb to have a written answer from each of our several billion members about what they want for the species over the course of the next hundred, thousand, or million years, and for our eternity. But even if I had such summaries, and even if each summary took only one minute for me to digest, I’d need to live over 14,000 years before I was able to shit out anything meaningful at the other end of the process. In this sense, we require technology to understand ourselves. Both human and computer methods of processing and understanding this information would seem to lose a lot of the juicy detail that I’d want while going about my daily life, though. As satisfying as a summary of human thought patterns for identifying planetary-, stellar-, galactic-, and universal-level goals might be, I would also want some assistive, contextual recall of an individual’s understanding as I’m about to interact, or am currently interacting, with him or her.
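For anyone who wants to check that 14,000-year figure, here’s a quick back-of-envelope sketch. The population number (~7.5 billion) and the one-minute-per-answer pace are my own round-number assumptions; the text above only says “several billion.”

```python
# Back-of-envelope: how long would one person need to digest every
# human's one-minute summary? Population is an assumed round number.
people = 7.5e9            # assumed world population ("several billion")
minutes_per_summary = 1   # one minute to digest each written answer

total_minutes = people * minutes_per_summary
years = total_minutes / (60 * 24 * 365.25)  # minutes -> years

print(f"{years:,.0f} years")  # ~14,260 years, i.e. "over 14,000 years"
```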

A part of me definitely wants to explore identity expansion, auto-manipulation, and merging. Most often I think about this through a technological lens, but I think it’s equally valid to think about it in terms of more traditional human relationships – the sort of everyday compromises that we ultimately store as part of our history and that are therefore likely to impact our identity in some way, however minor or subconscious an assimilation/alteration it may be.

Another part of me wants to explore various conceptions of utopian constructs and what I consider to be an ultimate reckoning with population-control mechanisms in the face of resource scarcity, a reckoning I believe to be fundamentally necessary for defining utopia. In any region of space-time that constrains a species’ resources, a lack of acceptable population control precludes utopia (as I see/define it).

I’m not even sure whether the drivel spewing forth from my noggin this evening is remotely comprehensible/intelligible.

While vomiting this up, I had zero intent of sharing, but what the hell! At least it seems innocuous overall. REPLACEMENT REQUEST: Please, please let me know if you read it all by reacting/commenting, but only if you read ALL of it. 100,000 thanks unto you!

I’m not going to publish this continuation with the above content. Still, I’m writing it at the same time. So maybe what I really want isn’t inherently focused on writing at all. Writing still seems like a reasonable medium for prompting people to share their desires for humanity, but so do any number of other content forms. AND, regardless of how the engagement is commenced or facilitated, there needs to be – and therefore it is even more important to have – a way to collect and process the responses. The feedback is what’s really important here. I guess prompting the feedback is very important too, only much less so by comparison. It’s probably good enough to say that both parts are very important and leave it at that.

“John, we need to go, and we need to go now.”

Goliath was always in a rush, and besides, half the time it wasn’t really urgent. I decided my game of Tharce was more important. I was about to win after all. My fool opponent had just sacrificed their last two

Is there some way for a mysterious world rift to disappear and, in the process, allow the rifted characters to be impacted by characters from the other side of the rift, even if there’s no logical reason that this would be the case? Yes, of course; that’s how fiction works.