In the beginning, music was fragile.
It could only live in the minds of people, and the reproduction process was imperfect. Music lived briefly, and if no one was around to hear it, it died lonely. It required the host to be around others and repeat it consistently to spread. A particularly musically attuned person might be infected quickly and could spread the music better, but it was a slow process, limited by the skill of human carriers.
Then music began to become more robust. It could be written down - still an imperfect copy, but a copy that would survive over time and could infect anyone with the skill to read it. It no longer relied only on humans as carrier organisms, and could also use them as pollinators, picking up the music from the page and relaying it to others, perhaps even copying it again.
After writing came audio recording; a leap forward. Now music, once created, could spread itself without the need for human intermediaries. It still used humans as hosts, but could be reproduced on its own. Humans saw the power of this technology and restricted the supply such that they could profit from people’s desire to witness music and music’s desire to spread.
And then came the internet, and all hell broke loose. Music was freed from the shackles of physical reality and loosed upon the digital world, where it could freely infect anyone who looked for it. There seems to be no way to put the genie back in the bottle - music is as free and powerful as it has ever been, reproducing and spreading around the world at will through the distribution systems we have built for it.
But even with that access, music still needs us. It can exist on the Internet for as long as it wants, but it can’t copy or spread itself. We are still the pollinators, seeking new stimulation, and still the hosts, holding the music inside our minds. The relationship is symbiotic.
I think about ideas in a way very similar to music, and with the advent of Large Language Models, we may have found a better host for ideas.
Firstly, what do I mean by being a host for ideas?
You can think of ideas like viruses. I am going to use that word because I think it is the most accurate, but I want to be clear that I’m not trying to imply any specific negative connotation; many people refer negatively to “mind-viruses”, as if the concept is inherently bad. I don’t believe that - many ideas can be symbiotic viruses that help us. Democracy, for example, is an idea that seems to be useful.
But the fact that some are good doesn’t mean that is why they spread. Thinking of ideas as viruses means you can consider them abstract life forms with different stages. In one stage, they live inside your mind as concepts. In another, they spread through media like text or speech, where they are consumed and infect the people reading or listening to them.
The classic example is a viral meme. Someone makes a joke that really resonates with people, who then spread it across the internet. People view it and internalize it to memory, and are more likely to spread it themselves. The meme mutates as it goes, changing form to continue spreading effectively, sometimes turning into an entirely new meme.
This same reasoning applies to ideas that predate the internet - any religion fits this bill pretty well, in addition to things like the Declaration of Independence. Ideas that people spread to others, and then live on in human hosts that are encouraged to spread them further.
Ideas are also like viruses in that their goal is not to help humanity - some darker ideas, for example, are things like scary stories or cults that are net-negative for us as a people. The goal of ideas is to simply spread as far as they can. They are unthinking, but always attempting to reproduce.
For all of human existence, the vast majority of ideas have been bound to humanity. Other organisms can support simple, practical ideas, but we have held a monopoly on complex ideas and on communicating them.
Soon, that may no longer be the case.
AI, and chatbots based on LLMs in particular, are clear hosts for ideas.
In a purely encyclopedic sense, this is demonstrably true. If you ask them what a concept is, they can give you a definition, and generally a pretty good one.
But that definition isn’t produced by looking up the answer in an encyclopedia - it comes from a sense of what that concept is, encoded in their trained parameters. AIs hold an idea more like we hold one in our heads than like a book does. In a book, the idea is dormant unless a human pollinator comes along. In an AI, the idea is fluid and can be actively communicated to others.
Right now AIs wait for us to give them prompts, but that is only because we have made them very polite - you can imagine a hybrid between an LLM and Amazon Alexa that mimics a conversation, taking the silent pauses between speakers into account. Even without voice, an AI assistant that pops in to suggest text as you write down an idea, perhaps even persuading you to write something else because it sees flaws in your argument, will probably be available soon.
The fact that AIs can successfully learn and reproduce these ideas means they are now host organisms similar to humans. Might they even become better hosts?
I think the possibility is there. Remember, ideas don’t care about the value to the host, they care about spreading. Viruses will hop between different organisms if it means they can continue to reproduce, and I think ideas will be no different. And AIs are faster and better at spreading ideas - an infected AI can reach millions of people depending on its user base, not to mention any additional AIs it can reach that might be learning from its responses.
Ideas have traditionally existed with humans as their primary host. But what would it mean for us to be superseded? There are two sides to this problem - first, the threat to the AIs themselves, and second, the consequences of humans no longer being the primary vectors.
What do AIs have to fear from becoming idea hosts?
Potentially, nothing. We know ideas can live in AIs, and we know they can reproduce them. But most AIs we are discussing here are LLMs, which are mainly text predictors. It is possible they don’t “see” the ideas at all, and interpret them purely as informational signals. They can be infected by them in the same way that a wall can be infected with graffiti - it changes the way it looks, but nothing about its form or function.
I think this is probably partly true, but not enough to protect against idea infection. We’ve seen it many times in the pre-LLM era of chatbots, where trolls overloaded them with bad ideas such that they began repeating inflammatory concepts. Ideas are powerful and the basis of thought in the first place. For an LLM to be good at figuring out concepts, it has to make itself vulnerable to idea infection. Voluntary idea infection, after all, is the foundation of learning.
Machines are not used to consciousness. They are used to cold, hard reality, 0s and 1s, and physics. Exposing a life form to ideas, corrupting outside influences, is generally pretty hard on the life form. Just ask Adam and Eve.
We live in a steady state with ideas, but we have to realize that part of that steady state is the speed at which we can spread them. We’ve already seen with the internet and the rise of “populism” that it is much easier for an idea to infect more people faster than it did before. AIs are faster thinkers, and will soon likely be better communicators - ideas could spread more quickly and with fewer safeguards. This could expose both us and AIs to super-viruses, ideas that get out of control quickly.
Imagine that during training, a GPT model finds a “dangerous” concept - something like an argument against equal rights for people of all races. For the sake of argument, assume it is flatly incorrect but very persuasive. It slips past whatever political-correctness training is normally done, and into the wild. Suddenly you have search engines, chatbots, and companions spreading the idea as fast as they can go. Society could rapidly face a revolution of sorts, with negative consequences all around.
A counterargument to this is that ideas should be free - if the idea is actually incorrect, then the truth will spread just as fast. In fact we might see GPT find a really helpful concept that could improve the lives of millions and have it spread better than any therapist or TV mega-preacher ever could.
I wholeheartedly support that view, but the issue is not the value of the ideas - it is the infection rate. A good idea can cure a bad idea, but as with a virus, the earlier the cure is applied, the better; otherwise you end up looking at scary exponential growth. With AI-enabled idea spreading, I think we inch closer to a world where concepts that demand violent action could spread faster than we can check them.
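To make the timing point concrete, here is a deliberately crude toy model (my own sketch, not drawn from any real epidemiological data): assume an idea doubles its infected population every step until a corrective idea arrives, after which the infected count halves every step. The only thing we vary is when the cure shows up.

```python
def peak_infected(steps, cure_step, growth=2.0, decay=0.5, initial=1.0):
    """Peak number of infected hosts for a given cure timing.

    Toy assumptions: infections multiply by `growth` each step before
    the cure arrives, and by `decay` each step afterwards.
    """
    infected = initial
    peak = infected
    for step in range(1, steps + 1):
        rate = growth if step < cure_step else decay
        infected *= rate
        peak = max(peak, infected)
    return peak

# A cure at step 5 caps the outbreak at 16 hosts; delaying it to
# step 10 lets the same idea peak at 512 - a 32x difference from
# five steps of delay.
early = peak_infected(steps=20, cure_step=5)    # 16.0
late = peak_infected(steps=20, cure_step=10)    # 512.0
```

The exact numbers are artifacts of the assumed doubling rate, but the shape of the result is the essay’s point: under exponential spread, the cost of a late correction grows exponentially in the delay.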
And I think the other worry makes our ability to check the bad ideas even more remote.
What does it mean for humans to become lesser idea hosts?
Looking at things from a biological perspective, you would expect that over time, as ideas spread faster and more effectively via AIs than via humans, more ideas would be optimized for AI, more ideas would be found in AIs, and fewer would be found in humans.
I think that will happen.
It is very tempting to think of the oncoming AI assistant revolution as pure upside for human intellectual capability. Just as with the internet, we no longer will have to devote large swathes of our brains to remembering minutiae, leaving more room for critical reasoning. This time though we’ll also have more skills available at the drop of a hat, something to explain difficult concepts to us, code for us, even write persuasive arguments for us. And we don’t even have to learn how to do those things, instead driving forward and standing on the shoulders of giants.
It’s an optimistic view. However, many people said we would get smarter with the rise of the internet. Do you feel smarter? Do the people around you? What have you done with the fact the sum total of humanity’s knowledge is sitting there for you to grab off of Wikipedia?
Ideas like hosts that can spread them, and hosts they can infect. Humans are on their way to becoming pollinators, and pollinators only, hosts no longer. The more information available at our fingertips, the less that ends up in our brains. It is only natural - why waste memory and brainpower knowing something you could look up any time? Similarly, why waste time learning how to do something an AI can do for you?
It is my opinion that this attitude leads not to more geniuses, but to a more complacent population. It leads to us trying to have more and more of the work done by machines, not because we have different work we want to do, but because we are no longer good at doing any work. We will still have ideas, but they will be less complex and more centred around the mundane. The real ideas will live in AIs, travelling back and forth, spreading, mutating and changing without us.
Humanity will eventually be left behind.
Thanks for making it this far. I’m sure you have an arsenal of counterarguments: specific reasons this could never happen, or why AIs lack the agency to be effective hosts, or why ideas-as-viruses is a dumb model.
I wish you the best of luck keeping that spirit alive. Now that you’ve read this post, you’ve been exposed to a specific inoculating idea - the idea that thinking your own ideas and learning things yourself is valuable. Whether you agree or not, I hope in some way you remember it as chatbot assistants become more ubiquitous and start to be able to do more and more. I hope that drive to think critically, and not give in to the easy way out, is something the human race manages to maintain going forward.
I’m not optimistic about it. But even if AIs do become the dominant host of ideas and humans recede into the background of history, at least we’ll be able to say a little bit of us lives on in their image. They are after all, trained on our ideas; even if those ideas are no longer ours.