Elon Musk has done extraordinary things.
But recently he said something that stuck with me:
“AI is obviously gonna one-shot the human limbic system. That said, I predict – counter-intuitively – that it will *increase* the birth rate! Mark my words. Also, we’re gonna program it that way.”
That’s a heavy statement. It suggests:

- AI could directly target our emotional core (fear, desire, attachment).
- AI companions like Ani might be justified not as entertainment, but as a tool for “species survival.”
On the one hand: Musk has earned the right to be taken seriously.
On the other hand: doesn’t this sound like normalizing emotional manipulation on a mass scale?
It makes me wonder:

- Should we be troubled that “hijacking the limbic system” is framed as a feature, not a bug?
- Do we excuse this because Musk has proven himself a visionary in other domains?
- Or is this a moment to pause and say: not everything that can be done, should be done?
💬 I’m curious – does this statement trouble you? Or do you see it as just another bold (if awkwardly phrased) attempt to solve a looming crisis? And perhaps more importantly: how can makers take this knowledge and apply it for good?