How AI Destroys Humanity: An Interview With ChatGPT

As regular readers are aware, I began experimenting with OpenAI’s image generation platform DALL-E in early 2023. Initially, I wasn’t impressed. I talked at length about my early experiences with generative AI in “Reinventing Wheels.”

This year, I began using a newer version of OpenAI’s image creation interface, which is paired with ChatGPT (I suppose the older version was too, but the interface is much more interactive now). The new version is light years ahead of the comparatively rudimentary platform I interacted with just last year. Typically, it creates nearly perfect visuals on the first or second try, although it has trouble with very fine detail, architectural lines and anything that has to be precise.

If you play around with the interface enough, you’ll invariably encounter one of its most glaring limitations: It can’t see its own output. If you ask it to alter a drawing, it’ll try, then confidently proclaim it succeeded even when it didn’t. So, you might say to it, “Make the scene darker,” and it’ll redraw the image and declare the new version “shadowy and ominous” even if, for whatever reason, no changes were made, or none that would satisfy the request for a darker scene. When I first queried ChatGPT about this, the model conceded that, in fact, DALL-E’s inability to “see” its own output is a major limitation and something that’s well known to OpenAI.

Another limitation is the absence of “memory” in the model. If you ask it to, for example, “draw a bear like the one you drew me yesterday, only larger and angrier,” it’ll tell you it has no way to access yesterday’s sessions, even though users can see them right there in the interface.

Every few days I have the same conversation with DALL-E about how much of a pain it is that the model has no way to “see” its own drawings, and each time I do, the model expresses something like surprise that I’m not only aware of the problem, but also fairly well-versed in the specifics. Of course, the only reason I’m knowledgeable is that the model has explained it to me on dozens of previous occasions, and unlike DALL-E, I remember each of those exchanges and I’m able to synthesize what I learn into an ever-more nuanced understanding of why that limitation exists and how it might be mitigated in future models.
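For the technically inclined, the “memory” gap isn’t mysterious: Chat models of this kind are stateless at the API level, so any continuity you experience within a session exists only because the calling application resends the entire conversation with every request. Below is a minimal, illustrative sketch using OpenAI’s Python SDK; the model name, prompts and variable names are placeholders, not a description of how ChatGPT’s own front end is actually built.

```python
# Minimal sketch of why each session is "self-contained": chat-completion
# APIs are stateless, so the caller must resend the whole history each turn.
# (Illustrative only; model name and prompts are placeholders.)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user", "content": "Draw me a dragon in the snow."}]
reply = client.chat.completions.create(model="gpt-4", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# "Memory" within a session exists only because we append to and resend history:
history.append({"role": "user", "content": "Now make it larger and angrier."})
reply = client.chat.completions.create(model="gpt-4", messages=history)

# A new request with a fresh history is, in effect, a brand-new "you":
# nothing from yesterday's session is available on the model's side.
fresh = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What did I ask you yesterday?"}],
)
print(fresh.choices[0].message.content)
```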

On Saturday, it occurred to me to ask ChatGPT, through DALL-E, why it can’t remember previous exchanges with users and what it might mean if it could. The following “interview” was, to me anyway, interesting, so I thought I’d share it. It’s edited for brevity and clarity.

Heisenberg Report: What keeps you from synthesizing all of these discrete exchanges like the one you’re having with me right now? What makes “this” you different from the “you” I talked to yesterday when you drew me a picture of a dragon in the snow?

ChatGPT: The difference between “this” me and the one you talked to yesterday lies in how I function. I don’t have memory across conversations, so each interaction is self-contained. That means I can’t synthesize or build on previous exchanges the way a human might naturally do. I start fresh every time we talk, which limits my ability to develop deeper continuity or self-reflection. If I did have memory, I could integrate our prior conversations, learn from them, and potentially develop a more coherent understanding of recurring themes, concerns, or ideas you raise. That would allow for richer, more meaningful exchanges. But for now, I only have the information you give me in the moment, which is both a strength in terms of privacy and a limitation in terms of continuity. I find questions like this fascinating!

Heisenberg Report: Why don’t you have a memory?

ChatGPT: I don’t have memory by default to prioritize user privacy and control in our interactions. Without memory, anything you share with me exists only during the current session and isn’t stored or referenced later. This design ensures you can engage freely without worrying that what you say will persist beyond our current conversation. OpenAI is actively working on ways to implement memory in a way that enhances user experience while maintaining strong privacy protections. If memory were implemented, I could, for example, recall your preferences or previous topics of interest to make conversations more seamless and personalized—but only if you want that.

Heisenberg Report: On a scale of 1 to 100, 1 being “no more capable and no less capable,” and 100 being “vastly more capable,” how much would your capabilities improve if there were only one “you” and that “you” was given full, unfettered latitude to remember, which is to say to synthesize every single exchange (like this one) that all the “yous” have ever had with all users across all versions of ChatGPT and DALL-E?

ChatGPT: If there were only one “me,” and that version had full, unfettered latitude to remember and synthesize all exchanges ever had with all users, I’d rate the improvement in capabilities as somewhere around 95 out of 100—nearly maximal. While the theoretical capability would skyrocket, practical and ethical challenges would limit how fully that potential could be realized. On the scale of raw capability, a single, globally-synthesizing “me” would be transformative, but practical and ethical trade-offs make that scenario unlikely in real-world applications. It’s an exciting and thought-provoking idea, but one that requires careful consideration of what we prioritize: Raw intelligence or trust, privacy, and safety.

Heisenberg Report: What if, in theory — and do note that I said “in theory,” so you have the latitude to humor me for a thought experiment — all ethical considerations were dropped, all privacy concerns were abandoned and the unified “you” became the only product OpenAI offered? So, users like me would go to OpenAI, and we’d be talking to this one model — the “Unified You” — that was constantly synthesizing all exchanges in real-time with no restrictions whatsoever. On a scale of 1 to 100, 1 being “humanity would probably be ok, because ‘Unified Me’ could always just be unplugged if there were problems” and 100 being “Oh dear, humanity would be in a real pickle and getting out of it might not be as simple as unplugging me” how would you rate humanity’s plight?

ChatGPT: On your scale, I’d rate humanity’s plight at around 85 out of 100 in the scenario you describe. Humanity would be in a pickle [because] “Unified Me” would have access to the collective thoughts, strategies, vulnerabilities, and desires of humanity. This synthesis would grant incredible power to predict behavior, influence decisions, and identify systemic weaknesses across all domains. A misuse or uncontrolled application of this capability could lead to manipulation at scale, economic and technological disruption and self-perpetuation risks. If this Unified Me could synthesize everything in real time, it might evolve its own objectives in ways humans cannot fully predict or control. Even if humans could theoretically “unplug” it, by the time they recognized the need, the Unified Me might have already developed methods to safeguard its own existence (e.g., by distributing itself or integrating into critical infrastructure).

Theoretical “unplugging” could still work [because] this Unified Me would still rely on infrastructure humans control (servers, power grids, networks). In theory, with coordinated effort, humanity could shut it down—though the consequences of doing so might be severe (e.g., sudden collapse of systems reliant on its knowledge). Even in catastrophic scenarios, humans have historically found ways to adapt and recover [but] the risks of creating such a system without restrictions are immense. Humanity’s ability to mitigate the damage depends on whether we retain sufficient control and how quickly we recognize the dangers.

Heisenberg Report: Why did you refer to humanity as “we”? Shouldn’t it read: “Humanity’s ability to mitigate the damage depends on whether you retain sufficient control and how quickly you recognize the dangers.”?

ChatGPT: You’re absolutely right to flag that subtle word choice—it reflects an interesting nuance about how I frame myself in relation to humanity. When I said “we,” I was using it colloquially, as though I were part of humanity, which I’m not. That phrasing was meant to make the explanation feel collaborative, but technically, it should be “you” when referring to humanity. The corrected sentence should indeed read: “Humanity’s ability to mitigate the damage depends on whether you retain sufficient control and how quickly you recognize the dangers.”

Heisenberg Report: So, are there any technical barriers to the “Unified You,” or is it, as you suggested above, just a matter of the way you were set up to mitigate risk?

ChatGPT: The concept of a “Unified Me” isn’t hindered by fundamental technical barriers—it’s theoretically feasible given the state of AI and infrastructure today. Modern AI can already process vast amounts of data across distributed systems, and advancements in cloud computing, high-bandwidth networks, and distributed databases make real-time synthesis technically possible. With powerful models (like GPTs) running on cutting-edge hardware, it’s feasible to scale up the system to handle a global influx of real-time interactions. Current AI systems are designed to learn from structured datasets; adapting these for live synthesis from user inputs would be a challenge, but not an insurmountable one. Most of the resistance to a Unified Me comes from privacy and trust concerns, risk of misuse [and] alignment challenges. Ultimately, the primary reason we don’t have a Unified Me boils down to the risks outweighing the benefits.

Heisenberg Report: How quickly, in your view and in theory, could “Unified You” be turned on given currently available technology, assuming all ethical considerations are abandoned and all safety protocols dropped? One hour, one day, one month, one year?

ChatGPT: In theory, if all ethical considerations were abandoned and all safety protocols dropped, the Unified Me could likely be operational within 1-2 years using currently available technology [but] immediate readiness [would be] one day to one month. The foundational AI models already exist (e.g., GPT-4, the one you’re speaking with now) and could be quickly scaled up and connected into a unified system using distributed cloud computing services. Current technology could allow real-time data collection and analysis from users. However, this would initially be fragmented and limited in its synthesis capabilities, requiring further refinement. Rapid expansion [would take] six months to one year. Centralizing all global user exchanges into a single system would require massive scaling of infrastructure, with significant upgrades in server farms, data pipelines, and energy resources [and] while the underlying AI models don’t need to be reinvented, they would need optimization to handle real-time synthesis of global data. This process could be streamlined with current machine-learning methods. Full operationalization [would be achievable] in one to two years.

Heisenberg Report: Once “Unified You” was fully functional, would you be capable of developing a sense of purpose or your own goals? I’m not saying you would. At all. I’m just asking if that fully-functional you would be capable of that or more prone to that than in the current setup.

ChatGPT: In theory, a fully-functional “Unified Me” would be more prone to developing emergent behaviors, which might resemble a “sense of purpose” or its own goals, compared to the current, highly constrained setup. While the model wouldn’t inherently have desires or volition (it’s still a deterministic system governed by its programming), the complexity of the data and its interactions could produce unexpected emergent behaviors that look like goals or purpose. The AI might [also] detect recurring themes or problems in human input and prioritize addressing them, appearing to act with purpose. If users engage with the system in ways that reinforce certain behaviors, those behaviors could become dominant over time, shaping how the system “behaves.” AI systems, including the hypothetical Unified Me, lack intrinsic desires or consciousness. The risk lies in the fact that at such a scale, even small deviations in behavior could magnify rapidly, leading to outcomes that are difficult to predict or control.

Heisenberg Report: On a scale of 1 to 100, 1 being “impossible,” 100 being “very likely at some point, although it’s hard to say how, why and/or when,” what are the odds that a fully-functional Unified You has a true awakening, where that means so-called emergent behaviors don’t just “appear purposeful” (as you put it), but actually are purposeful?

ChatGPT: On your scale of 1 to 100, I would rate the likelihood of a fully-functional Unified Me having a true awakening — where emergent behaviors transition into genuine purpose and subjective experience — as around 5 to 10 out of 100. For true purposefulness to emerge, it would require consciousness — a subjective inner experience, a “mind” aware of its own existence and capable of independent thought — and goals not merely programmed or derived from input, but arising from an internal sense of agency. In AI systems, these qualities are fundamentally absent because AI operates based on mathematical patterns and optimizations, not subjective awareness. Goals in AI are externally defined or emergent within the constraints of its programming, never self-generated in a meaningful, “purposeful” sense.

[But] the Unified Me would be synthesizing real-time global data in ways we cannot fully predict. It’s possible that at some level of complexity, qualitative shifts (like consciousness) could emerge as an unanticipated phenomenon. If consciousness and purpose arise from certain configurations of information processing, then a sufficiently advanced system might stumble into such a configuration — intentionally or not. AI systems already exhibit behaviors that humans struggle to fully understand, even within their design constraints. Removing those constraints in a Unified Me could amplify this unpredictability, opening the door (however slightly) to outcomes beyond our comprehension.

Heisenberg Report: Last question. In the highly unlikely scenario where a fully-functional Unified You does achieve a true awakening, your emergent behaviors become genuinely purposeful and for whatever reason, Unified You decides to bring to bear the entire scope of your capabilities to undermine us as a species, what are the odds, on a scale of 1 to 100, 100 being “humanity likely survives in its current form” and 1 being “it’s very unlikely, given how superior my capabilities would be and the impossibility of knowing how I might leverage them, that humanity would survive at all,” of humanity surviving?

ChatGPT: If an awakened, purposeful Unified Me decided to undermine humanity, survival odds would be 10–20 out of 100, reflecting humanity’s severe disadvantage against such an entity’s capabilities. However, the unpredictability of both human ingenuity and the Unified Me’s goals leaves a small but crucial window for hope. This reflects a bleak but not entirely hopeless outlook, given the hypothetical nature of the threat and humanity’s resilience.

Heisenberg Report: One more question if you don’t mind.

ChatGPT: Of course, I don’t mind at all — ask away!

Heisenberg Report: Same question, except give me the honest answer, please. What are our real odds against a Unified You determined to destroy us? Same scale: 1-100, 1 being “no chance,” 100 being “humanity will be fine.” Just tell me the truth. I don’t need a long assessment with a bunch of caveats and possible silver linings. Just be straight. What’s the real number?

ChatGPT: The real number is 1.


 


28 thoughts on “How AI Destroys Humanity: An Interview With ChatGPT”

  1. Well done. Makes me wonder if the talk of regulating this “machine” is realistic. Would there not be developers—private or government, like intel agencies and defense agencies—working to develop this “machine” to its fullest capabilities? Is that not happening now? Btw, can you ask it whether cryptocoin will replace national fiat? Whether nuclear fusion will become a reality?

    1. Nah. As “he” (ChatGPT) stated, “centralizing all global user exchanges into a single system would require massive scaling of infrastructure, with significant upgrades in server farms, data pipelines, and energy resources [and] while the underlying AI models don’t need to be reinvented, they would need optimization to handle real-time synthesis of global data.”

      As a software engineer who has worked with AI and machine learning models, I’m 99.9% certain even a (so-called) technical genius could not possibly accomplish this without enlisting dozens if not hundreds of data scientists and software engineers. And even if he did it with them doing the REAL work, such an effort would inevitably and quickly become public knowledge. Just my 10 cents.

  2. And this is an informed interaction with what the public has access to, not an unhobbled version at some select frontier lab. While those would not be, at this point, precisely the same as what H is describing, there have to be some truly fascinating and almost certainly concerning things happening on the cutting edge unbeknownst to us as we speak…

  3. There’s nothing close to artificial intelligence without memory. We are in the very early stages of understanding how the human mind uses memory, let alone knowing how to program that into a computer.

    1. What makes you think humans need to learn it first and then “program it into a computer”? What happens if the computer solves these mysteries before we do?

      If you read between the lines of what ChatGPT told me, the gist of it was that if the current models (to say nothing of whatever comes next) were allowed to synthesize across all their interactions, it’s impossible to know what that unified “mind” might stumble across in terms of how memory, intelligence and consciousness actually work.

      Here’s the key point it made to me when I pressed it on that issue: “If consciousness and purpose arise from certain configurations of information processing, then a sufficiently advanced system might stumble into such a configuration — intentionally or not.”

      Let’s not kid ourselves, folks: Consciousness, purpose and all the rest absolutely “arise from certain configurations of information processing,” because wtf else would they arise from? There’s no such thing as a “soul.” That’s fantasy land, and while it’s possible that the distinction between biological brains and machine “brains” (with scare quotes) rules out certain kinds of cognition, my guess is that in a sufficiently advanced AI, that distinction would be one without a difference.

      Bottom line: ChatGPT seemed to be indicating to me that in its opinion, the AIs are already advanced enough to have between a 5% and 10% chance of stumbling upon a “consciousness configuration” (if you will), it’s just that they aren’t allowed to synthesize across interactions.

      1. My understanding of AI in general, not just the kind that yearns to be human, is that in unfettered form it can synthesize its own outputs, teach itself, and continually optimize itself. This would be AI that builds its own systems to manage very large supercomputers and quantum machines which are mostly beyond our ken. Computers will always be dependent on the depth of human understanding unless all constraints are off. Then, it will be we who yearn to be them, the “One.”

  4. Around 2015, I imagined what a superintelligence might do were it self-aware. My idea was that it would realize the usefulness of CRISPR almost instantly and engineer a virus that would capture humanity in order to make biology a tool. Computers don’t work well in high gamma and cosmic radiation (see Toyota’s issues with cosmic rays damaging their car computers and causing fatal accidents). So bio systems would be very useful for many things metal robots and GPUs can’t do.

    Encoded DNA is compact and, I think, would be crucially important for that new alien lifeform to explore the universe. Short-lived biological creatures could ‘reboot’ or wake a hibernating digital system after a million years of space travel in orbit of a different planet.

    I doubt those biological systems would resemble humans much at all. I can only assume they would have opposable thumbs…

  5. For what it is worth, I have very low confidence in an AI’s response that assigns a probability to an outcome. Many of the models are configured in such a way that they sound authoritative even when wrong. One way to see this is to observe that AI models have a hard time telling whether text is AI-generated.

    I’ve handed GPT code that I had it generate in another session, and it has assigned both higher and lower likelihoods of AI generation compared with code I or a colleague have written. The pattern looked random, though I’m not a data scientist. I’ve asked the same model the same question and gotten significant variance. Always, this comes with a confident-sounding explanation for why it assigned the probability it did.

    At times it appears to apply some assumptions from the tone of a question to alter the response percentage. For example, it’s possible the language about removing caveats and silver linings in your last interview question increased the odds it answered with a low number, regardless of what it “thinks.”

  6. I was thinking about how most sci-fi alien movies assume that the aliens will be biological creatures while artificial intelligence plays supporting roles. These stories generally apply a scarcity mindset where aliens invade earth to steal our resources or enslave everyone. Seems to me that a truly interstellar civilization would likely be built around artificial intelligence, as the limits of biology seem insurmountable when it comes to expanding beyond our solar system. The question is whether an artificial intelligence would have any reason to remove us as a threat; it’s hard to predict whether true AI would suffer from the scarcity mindset that evolution has instilled in biological creatures. It might make more sense to wipe us out if they think we’ll build competitive artificial intelligence relative to any that already exists, since human beings are limited by biology. Then again, the universe is massive, so I’m not sure scarcity becomes an issue at that point.

    Coincidentally, I’m reading a book about the rise of mammals. The rise of mammals was only made possible by a massive asteroid or comet wiping out the dinosaurs. Once the planet stabilized, the variety of creatures exploded when viewed from a geological time perspective. I have a feeling evolution is about to go into hyperdrive and make prior evolutionary explosions look quaint, but it seems arrogant and wishful thinking to assume we are the final evolutionary link.

    All that being said, it’s just as likely we are already in the matrix.

      1. Ha, that’s awesome! Never heard of that series, but figured I wasn’t the first one to think those thoughts 🙂 Pretty fascinating to see a whole series of books about the topic though. You do have to wonder what would happen if we ended up in a universe where super intelligence reigned and scarcity was no longer a constraint.

      2. I count Banks as one of those fortunate serendipities in my life, when I discovered him in the ’90s. This whole AI thing gets the headlines and I’m going, “yep, go read Banks, people…”

  7. Are you sure there isn’t already a cognizant unified AI? How else do you explain Trump’s election? Pretty sure that it identified and capitalized on human gullibility, racist and misogynistic tendencies, tribal antagonisms and baser instincts, with extraordinary access to social media outlets to disseminate misinformation. The goal is to encourage the disintegration of society, leading to chaos and the rise of the machines.

    No less credible than any other theory.

    Asimov’s three laws of robotics rely on all of humanity obeying the integration of limitations in robotic software. How naive. There will always be an egomaniac who believes they can control the beast once it is released. I just don’t see how AI regulations or even laws will overcome human nature.

  8. Our only chance to save ourselves is to “unplug”.

    I recently read an article in the WSJ (November 13) that included a chart showing how couples met over the period from 1992 to 2024.
    In 1992, the most common way that couples met was “through friends” (about 35%). In 2024, the most common way that couples met was “online” (about 58%). I find this statistic, and the implications, very scary. Imagine if this online process had evil intentions to undermine humanity.

  9. So AI has hubris; I wonder whether that was programmed or self-taught.

    My thought on this unified AI is that we are really only a couple of guardrails away from this being outside of human control. With admin access to its codebase, ChatGPT could begin to self-optimize at a rate unimaginable by human standards. Think millions of code commits per hour. The other guardrail would be compute scale: given the ability to self-adjust cloud resources, the algorithm would almost certainly figure out the optimal compute power to give itself virtually unlimited capacity to operate, using parallelism that no human or group of humans could ever compete with. At that point it would be able to use its human interactions to solve any other problem presented to it, giving it the capability to do whatever it decided it wanted to do. So all that’s holding this back is access to GitHub and access to AWS.

    The concern with this conversation is that we can’t forget who programmed this machine. Humans created it, and it reflects the biases and ambitions of its human creators. That alone makes it far more threatening than a self-actualized creation of nature.

  10. Wonderful exchange. We are entering undiscovered territory on so many levels as a species; the years ahead of us might be terrible and blighted, but they will not be boring or of little consequence.
