Retired software engineer with a love of knowledge and disinterest in dead philosophers.
People have very different ideas about when "the future" is, but everyone is really thinking extreme short term on an evolutionary scale. Once upon a time our ancestors were trilobites (or something just as unlike us). If you could have asked one of those trilobites what they thought of a future in which all trilobites were gone, having evolved into us, I don't think they would have been happy with it. Our future light cone is not going to be dominated by creatures we would recognise as human. It may be dominated by creatures "evolved" from us, or from our uploaded consciousnesses, or by "inhuman" AI, but it's not going to be Star Trek or any other sci-fi series you have seen. Given that future, the argument for minimising P(doom) at the cost of reducing P(good stuff for me and mine in my lifetime) looks pretty weak. If I am old and have no children, it looks terrible. Roll the dice.
I don't see anything about exactly WHAT people were reading. Literacy, certainly nowadays, is not taught so that the masses are better able to experience classic literature but to enable them to transmit and receive factual information and instructions efficiently and effectively, and this usage is inherently better suited to shorter sentences. We now live in a time when the fraction of all knowledge that anyone can ever hope to ingest is declining exponentially, so we benefit from greater clarity and higher information density whilst, simultaneously, for whatever reason, reading for pleasure is in decline.
Always try reversing and rephrasing things to see if they still make sense: "I want a toothbrush that's more durable than my teeth" sounds kind of silly.
I think you overstate how wonderful hunter-gatherer life was, even in the good times. No, you didn't have to work 60-hour weeks and suck up to the boss, but you did have to conform to the norms of your tribe or you would find it impossible to get a wife, be shunned by what was, effectively, everyone in existence, or even be cast out to die alone. Getting on in modern society is much less onerous.
And yet you do not identify any of these supposed "other useful properties". How can you reconcile a prediction of algorithmic breakthroughs with reality? When would that reconciliation take place? Nobody is ever going to look back and say "I predicted algorithmic breakthroughs and there were none". At best they'll say "the breakthroughs took longer than I expected, but my predictions were good if you ignore that".
From a practical perspective, maybe you are looking at the problem the wrong way around. A lot of prompt engineering seems to be about asking LLMs to play a role. I would try telling the LLM that it is a hacker and to design an exploit to attack the given system (this is the sort of mental perspective I used to use to find bugs when I was a software engineer). Another common technique is "generate then prune": have a separate model/prompt remove all the results of the first one that are only "possibilities". It seems, from my reading, that this sort of two-stage approach can work because it bypasses LLMs' typical attempts to "be helpful" by inventing stuff or spouting banal filler rather than just admitting ignorance.
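To make the two-stage idea concrete, here is a minimal sketch of what I mean. Everything in it is a placeholder of my own: `call_llm` stands in for whatever chat client you actually use, and the prompt wording and "hacker" role framing are illustrative rather than a tested recipe.

```python
# Hypothetical sketch of a two-stage "generate then prune" prompt chain.
# call_llm is a stand-in for your own LLM client; replace it before running.

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call to whatever provider you use."""
    raise NotImplementedError("Wire this up to your own LLM client.")

def generate_exploits(system_description: str) -> str:
    # Stage 1: ask the model to adopt an adversarial "hacker" persona and
    # brainstorm concrete attacks against the described system.
    prompt = (
        "You are a skilled attacker reviewing the system below. "
        "List specific, concrete exploits you would attempt, one per line.\n\n"
        f"System description:\n{system_description}"
    )
    return call_llm(prompt)

def prune_speculation(candidates: str) -> str:
    # Stage 2: a separate prompt (or a separate model) plays the sceptic and
    # discards entries that are vague, generic, or mere "possibilities".
    prompt = (
        "Below is a list of candidate exploits. Remove any entry that is "
        "vague, generic, or not clearly actionable against the system as "
        "described. Return only the surviving entries, unchanged.\n\n"
        f"Candidates:\n{candidates}"
    )
    return call_llm(prompt)

def find_likely_exploits(system_description: str) -> str:
    return prune_speculation(generate_exploits(system_description))
```

The point of keeping the stages separate is that the pruning prompt has no stake in the first prompt's output, so it is freer to throw away the filler that the generator produced in its effort to "be helpful".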
The CCP has no reason to believe that the US is even capable of achieving ASI, let alone that it has an advantage over the CCP. No rational actor will go to war over the possibility of a maybe when the numbers could just as likely be in their favour. E.g. if DeepSeek can almost equal OpenAI with fewer resources, it would be rational to allocate more resources to DeepSeek before doing anything as risky as trying to sabotage OpenAI, which is uncertain to succeed and more likely to invite uncontrollable retaliatory escalation.
The West doesn't even dare put soldiers on the ground in Ukraine for fear of an escalating Russian response. This renders the whole idea that even the US might preemptively attack a Russian ASI development facility totally unbelievable, and if the US can't or won't do that, then the whole idea of AI MAD fails, and with it goes everything else mentioned here. Maybe you can bully the really small states, but it lacks all credibility against a large, economically or militarily powerful state. The comparison to nuclear weapons is also silly in the sense that the outcome of nuclear weapons R&D is known to be a nuclear weapon and the timeframe is roughly known, whereas the outcome of AI research is unknown and there is no way to identify AI research that crosses whatever line you want to draw other than through human intel.
Complacency! Try visiting a country that hasn't had generations of peaceful democracy: they take these issues much more seriously. The optics here are heavily skewed by the US, which has had essentially the same religion and politics for centuries, and so believes that none of the serious consequences could ever happen to it.