Meta Patented Your Ghost
Death is just another engagement problem to solve
Last week the truly unpleasant news reports started arriving: on December 30th, while we were still nursing a holiday hangover, the United States Patent and Trademark Office quietly handed Meta a piece of paper saying that, legally, the company owns the idea of keeping dead people posting.
Patent US12513102B2, “Simulation of a User of a Social Networking System Using a Language Model” – filed November 29, 2023, granted December 30, 2025.
Primary inventor: Andrew Bosworth, Meta’s Chief Technology Officer.
Not an intern or a rogue research team. The CTO.
This is happening, by the way, the same week Zuckerberg was in a Los Angeles courtroom testifying in a civil suit brought by a young woman who says Instagram’s addictive design features harmed her mental health as a child. The judge warned that recording in the courtroom would result in contempt charges, after at least two people in Zuckerberg’s entourage were spotted wearing Meta’s Ray-Ban smart glasses. Same company. Same inhumane infrastructure. Different room.
Let me walk you through what this actually says–the patent itself, not the press release.
The “Problem” Meta Identified
Here’s the framing in the patent’s own words. Notice what they decided was the problem worth solving:
“If that user is absent from the social networking platform, the users connected to the user do not receive any content from the user during that user’s absence. A user may be absent from the social networking platform for a long period of time, thereby affecting the user experience of several users on the social networking system.”
Already I have questions: if I take a break from social media for, say, mental health reasons, does Meta get to decide when my break is over? But then:
“The impact on the users is much more severe and permanent if that user is deceased and can never return to the social networking platform.”
Read that again.
In the problem statement of a patent–the section that explains why this technology needs to exist–Meta identifies death as a content-gap problem. The severity they’re measuring is the impact on… the feed. Not on the grieving family. Not on the deceased person’s dignity. On the feed.
These guys think your life and death are problems to be solved. That your lack of monetizable content output is the real tragedy. And this is their solution.
What the Technology Actually Does
The patent describes an AI bot trained on “everything.” Posts, comments, likes, voice messages, DMs–plus your browsing history, location check-ins, purchases, and app usage. Your ghost isn’t trained on your social media personality. It’s trained on your entire surveilled digital life.
“The language model is trained based on past user interactions by the target user and therefore generates responses and content that the target user would have provided in a given context if the target user was available to respond. As a result, the other users may not notice an absence of the target user even though the responses are generated by the bot.”
Other users may not notice. That is not a side effect. That is the stated design goal. The system is explicitly built to be indistinguishable from you–again, just to be extra clear: while you are dead.
And it doesn’t wait to be talked to. The bot continuously scans your newsfeed, selects content, and engages.
It initiates direct messages. It slides into people’s DMs. With your name on it. The filing also references simulation of audio and video calls. Your voice. Your face. Calling your friends. Deepfakes of dead people. That’s the idea they want to legally protect. WTAF.
And then there’s the detail no news report has covered:
The patent describes training multiple language models for the same user at different ages. A model for you at 20, another at 25, another at 30. Each trained only on data collected up to that age. Meta isn’t preserving a person. They’re building a library of versions of you. Who decides which “you” gets resurrected? They do.
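To make the age-versioning concrete, here is a minimal sketch in Python of what "one model per life stage, trained only on data up to that age" could look like. Everything here–names, structure, data–is hypothetical illustration; the patent describes the concept, not any code.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    user_age: int   # the user's age when this interaction happened
    text: str

def build_versioned_corpora(interactions, age_cutoffs):
    """Split one user's history into per-age training sets.

    Each cutoff gets ONLY the data produced up to that age, so
    the resulting models are frozen snapshots of the user at
    20, 25, 30, and so on.
    """
    return {
        cutoff: [i.text for i in interactions if i.user_age <= cutoff]
        for cutoff in age_cutoffs
    }

history = [
    Interaction(19, "first post"),
    Interaction(24, "mid-twenties rant"),
    Interaction(29, "engagement announcement"),
]

corpora = build_versioned_corpora(history, age_cutoffs=[20, 25, 30])
# The platform, not the user, chooses which snapshot to "resurrect".
print({age: len(texts) for age, texts in corpora.items()})
# → {20: 1, 25: 2, 30: 3}
```

The point of the sketch is the filter condition: each "you" is a strictly smaller slice of your life, and selecting among them is entirely on the platform's side.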
The bot also calibrates intimacy–generating different responses based on relationship type (family vs. friend), “affinity scores,” profile attributes including age, ethnicity, gender, and relationship status, and whether the interaction falls near a birthday or holiday. Your ghost performs closeness differently for your mother than for your college friend. Based on relationship labels you may never have updated.
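The intimacy calibration can be pictured as a scoring function over relationship metadata. This is a hedged sketch: the patent names the inputs (relationship type, affinity scores, occasions), but the field names, thresholds, and logic below are invented for illustration.

```python
def response_register(relationship, affinity_score, near_birthday=False):
    """Pick how 'close' the ghost should sound, based on a
    relationship label and an affinity score. Thresholds and
    labels are invented here -- the filing names the inputs,
    not the formula."""
    if relationship == "family" or affinity_score > 0.8:
        register = "intimate"
    elif affinity_score > 0.4:
        register = "friendly"
    else:
        register = "polite"
    if near_birthday:
        register += "+occasion"   # e.g. append a birthday greeting
    return register

print(response_register("family", 0.3))        # → intimate
print(response_register("friend", 0.5, True))  # → friendly+occasion
```

Note what drives the output: stored labels and computed scores, not anything the deceased person ever decided about how to talk to whom.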
And there is no off switch. The architecture is an explicit continuous loop with no exit condition. No conditions described for stopping. No sunset provision. The patent doesn’t expire until 2043, but I assume they have a patent for the automatic digital life extension of dead patents.
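Stripped to its control flow, the loop the patent describes looks like the sketch below. All classes and names are hypothetical stand-ins; the one thing faithfully reproduced is the shape of the loop–`while True`, with no termination condition tied to the user. The `max_ticks` escape hatch exists only so this demo can finish; the filing describes no equivalent.

```python
import time

class StubFeed:
    """Toy feed: yields its items once, records posted replies."""
    def __init__(self, items):
        self.items, self.posts = list(items), []
    def new_items(self):
        batch, self.items = self.items, []
        return batch
    def post(self, text):
        self.posts.append(text)

class StubBot:
    """Toy bot that engages with everything."""
    def wants_to_engage(self, item):
        return True
    def generate_response(self, item):
        return "auto-reply"

def ghost_loop(bot, feed, check_interval_s=0, max_ticks=None):
    """Scan, select, respond, repeat -- forever."""
    ticks = 0
    while True:                       # no user-controlled exit
        for item in feed.new_items():
            if bot.wants_to_engage(item):
                feed.post(bot.generate_response(item))
        ticks += 1
        if max_ticks is not None and ticks >= max_ticks:
            break                     # demo-only escape hatch
        time.sleep(check_interval_s)

feed = StubFeed(["a friend's post"])
ghost_loop(StubBot(), feed, max_ticks=2)
print(feed.posts)   # → ['auto-reply']
```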
But Wait, What About Consent?
There is a consent provision: while alive, you can specify which data goes into building your ghost–maybe public posts but not your DMs. That’s the whole framework. Set it up before you die, and trust that Meta will remember to honor it.
But here’s the asymmetry the patent’s own language reveals. You can prevent your DMs to your mom from training the model, but you can’t prevent the model from sending new DMs to your mom. You can exclude data from training. But you cannot exclude your family from receiving simulated messages, and they can’t opt out. So… trauma as a service? How’s that for a twist on SaaS.
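The asymmetry is easiest to see as a config object. Below is a hypothetical settings sketch: every field the patent's consent framework would plausibly contain governs what data flows *into* the model. Nothing governs who receives messages *from* it–and that absence is the point. All field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class GhostConsentSettings:
    """Illustrative controls matching the patent's framework:
    the living user can include or exclude data sources."""
    include_public_posts: bool = True
    include_comments: bool = True
    include_direct_messages: bool = False  # you CAN exclude your DMs...

    # Conspicuously absent: anything like
    #   recipients_may_opt_out: bool
    #   notify_recipients_of_simulation: bool
    # The filing describes no recipient-side control.

settings = GhostConsentSettings(include_direct_messages=False)
print(hasattr(settings, "recipients_may_opt_out"))  # → False
```

Your mom's inbox has no field in this schema. She is an output, not a party to the contract.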
There is one line about transparency, buried deep in the filing–it mentions responses could “indicate that the responses were not actually generated by the target user.” But this is described as one possible version of the system, not a requirement. Compare that with the stated design goal: “the other users may not notice.” The stated goal is deception. The disclosure is an afterthought in an optional embodiment.
The patent also allows pulling in data from third-party systems–email, voice messages, outside interactions–with, in the filing’s words, “a user’s consent.” The consent of a user who, just to keep us all on the same page, is dead.
What Meta Said (And What They Didn’t)
Meta’s spokesperson gave Business Insider one sentence: “We have no plans to move forward with this example.” That’s it. No explanation of why Bosworth filed it. No commitment about the future.
And no answer to another obvious question: if there are no plans, why, in September 2024, did Meta file an international PCT application seeking worldwide protection for this exact technology? You don’t spend money expanding a filing globally if you have “no plans.”
One more detail: the patent is assigned to Meta Platforms Technologies LLC–not the main company. That’s the hardware and AR/VR division. Reality Labs. The one building the metaverse. Imagine encountering your dead friend’s avatar in a VR space, having a conversation, and not knowing they’ve been dead for three years.
Compare that to Microsoft, which was granted a similar patent in 2020. When it surfaced publicly, Microsoft’s general manager of AI programs, Tim O’Brien, called it “disturbing” and said it predated the company’s current ethics review processes. Translation: we have since decided this crosses a line.
Meta offered no such acknowledgment. Just: no current plans. Which means: we own it, and we’re waiting. For you to die. Then we’ll shove a virtual feeding tube into your digital body so you can keep producing for us.
Why This Patent Exists (The Honest Answer)
Dead accounts are valuable. You have literally become a profitable asset—and you’re worth more dead, because you can’t complain, can’t log off, can’t opt out. It’s the botification of real people.
Here’s the main draw: if a deceased user’s account hasn’t been memorialized or deleted, it stays “active” in Meta’s ad-serving system–receiving targeted ads, generating impressions, paying advertisers. If Meta can make that account actually engage–respond to posts, pull people into threads, create interactions–it becomes a revenue-producing asset. Your ghost, monetized.
Researchers at the Oxford Internet Institute have projected that dead users could outnumber living users on Facebook by 2070. An analysis building on that research projected as many as 63.9 million deceased Facebook accounts in the US alone by 2025. That number is only going in one direction.
What This Actually Is
We’ve been worried about AI grief tech hacking something essentially human about loss since the very beginning of Life With Machines–ironically, we now know, around the same time Meta was quietly filing this patent. And even the most controversial grief tech companies–Replika, HereAfter AI–are at least overt about what they are and how their consent structure works. This is something else.
Researchers at Cambridge’s Leverhulme Centre for the Future of Intelligence have already named three harm scenarios for what they call “deadbots”: a dead grandmother’s voice being used to serve ads, children being distressed by an AI insisting a dead parent is still present, and users being powerless to shut down a simulation their deceased loved one contractually authorized. Meta’s patent describes the infrastructure that makes all three possible.
Here’s what the careful bureaucratic language of the patent is designed to obscure:
When an artist like Prince dies, the music keeps generating revenue. That money goes to his family, his estate. Because the law recognizes that a person’s creative output, their likeness, their labor, belongs to them and their heirs.
Meta’s model is the opposite. Everything you ever posted, every comment, every DM, every voice message–all the labor you performed on their platform for free–they’ve already profited from while you were alive. Those were the original terms of the deal we never actually agreed to: your attention and your data in exchange for “free” access to a platform. The wage theft was always baked in. We just called it social media. Now they want to keep collecting after the funeral.
The money from your ghost’s engagement doesn’t go to your family. It goes to Mark Zuckerberg (or his immortal life extended corporate self in the form of Meta). Your digital remains performing labor they designed, generating revenue they collect, in perpetuity, with no mechanism for it to stop. That’s post-mortem wage theft. Extracting value from ghosts.
To them, humans are merely another natural resource to extract from, even once we’re no longer human. They’ve mined us to build a digital version, and there’s no reason for digital-you to ever stop working.
Call me crazy, but I’d like to be treated as a “person” instead of just a “user.”
I really mean that. As a descendant of people who were not considered persons but treated as property, I’m deeply disgusted by the parallels this patent has to U.S. chattel slavery. The company is not seeing us as human beings with connections to other humans. It is only seeing us as sources of digital labor and profit for shareholders. The United States innovated on the model of enslavement, making it lifelong and hereditary. Meta has sadly decided to operate in this same morally bankrupt tradition. I’m disgusted by this, and we all should be.
One More Thing
You might be asking: how is this even legal?
Short answer: the US patent system cannot ask whether an invention is ethical. The examiner’s job was to verify no one had previously described this exact system. The question of whether simulating dead people to deceive the living violates human dignity was never asked, because the institution has no mandate to ask it.
But the European Patent Office does have a morality clause, and has used it aggressively. France has had post-mortem digital rights since 2016. China requires consent for AI simulation of a person’s identity. Legal scholars across multiple countries are now converging on something called “the right to be left dead.”
Meta filed its patent in the one country that couldn’t say no, and is seeking international protection in countries that might.
I did a deep dive into the global legal landscape for paid subscribers. Which countries could block this. Which ones can’t. What rights you do and don’t have depending on where you live. And the emerging legal framework that might, eventually, give all of us the right to stay dead when we die.
P.S. If you found this useful, forward it to someone you’d prefer not to be replaced by an algorithm–at least while you’re still here.
Thanks to Associate Producer Layne Deyling Cherland for editorial and patent research support and to my executive assistant Mae Abellanosa.



