Modeling and Emotions

Last week, I introduced the concept of modeling and used that concept itself as a model for what the human mind does. That sounds self-referential and nonsensical at first, but there's no mathematical reason why a universal modeling system couldn't model itself. In fact, you can ask ChatGPT about the transformer architecture and how to train it, and it will give you plenty of information. I think this observation holds a lot of beauty and has many applications in understanding the human mind. Yes, that's right: I think machine learning can be applied to philosophy. Not in the sense that we train machine learning models to do philosophy, but in the sense that we borrow language from machine learning to reason about the modeling process that serves as a model for the conscious mind.

So today, let's look at emotions and how they might fit into this concept. If we were machine learning models, one might say emotions serve the purpose of the loss function. But since we are not machine learning models and our emotions don't output a single loss value, a bit more explanation is needed.
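To make the analogy concrete, here is a minimal sketch of what "serving the purpose of the loss function" means in the simplest case. Nothing here comes from a real system; the model, the data and the numbers are all made up for illustration. The point is only that the model never sees the truth directly: all it receives is a single number telling it how wrong it is, and it moves in whatever direction shrinks that number.

```python
# A toy loss function steering a toy model (everything here is invented
# for illustration). The model is y = w * x with a single parameter w.

def loss(w, data):
    # Mean squared error: one number measuring how wrong w currently is.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, w=0.0, lr=0.01, steps=100):
    eps = 1e-5
    for _ in range(steps):
        # Numerical gradient: in which direction does the loss shrink?
        grad = (loss(w + eps, data) - loss(w - eps, data)) / (2 * eps)
        w -= lr * grad  # step towards lower loss
    return w

data = [(1, 2), (2, 4), (3, 6)]  # hidden rule: y = 2x
print(train(data))  # approaches 2.0; the loss value was the only guide
```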

In the model I want to propose, the main purpose our emotions serve is to give us something to optimize towards. And when I say emotions here, I really mean the whole of the endocrine system. That system is huge and complex and probably incorporates parts of the brain as well. It's not unlikely that this system performs a modeling process of its own, one that is hidden from the conscious mind. A fun observation is that, if this is true, it probably has a model of the conscious mind. Although I have no idea whether that theory holds, one thing I'm certain of is that the endocrine system is more intelligent than most people think, and you should not underestimate it. The interesting part is the way it interacts with the conscious mind. By manipulating the release of chemicals throughout the whole body, it dynamically changes the hyperparameters of the modeling process that is the conscious mind.
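To give you a picture of what "a second system retuning the hyperparameters mid-run" could look like, here is a toy sketch. The controller, the rule it applies and the name endocrine_controller are all invented for this post; it simply watches the loss trend and adjusts the learning rate while training is still in progress.

```python
# Toy sketch: a second process observes training and retunes a
# hyperparameter (the learning rate) on the fly. Entirely invented.

def loss(w):
    return (w - 2.0) ** 2  # toy objective with its minimum at w = 2

def endocrine_controller(prev_loss, curr_loss, lr):
    # Crude "emotion": back off when things get worse, push when they improve.
    return lr * 0.5 if curr_loss > prev_loss else lr * 1.05

def train(w=0.0, lr=0.1, steps=50):
    prev = loss(w)
    for _ in range(steps):
        grad = 2 * (w - 2.0)  # exact gradient of the toy objective
        w -= lr * grad        # the "conscious" optimization step
        curr = loss(w)
        lr = endocrine_controller(prev, curr, lr)  # hyperparameters shift mid-run
        prev = curr
    return w, lr

print(train())  # w settles near 2.0 while lr drifts with the loss trend
```

The real systems I'm speculating about would be vastly more complicated, but the structure is the same: one process optimizes, another modulates how it optimizes.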

I have never heard of a machine learning approach that uses a separate model to vary hyperparameters during training, but I'm not exactly doing literature research for these blog posts; otherwise I couldn't write one every week. But if you're a machine learning researcher, how's that for an idea? Anyways, what we perceive as emotions might be the effects of our endocrine system trying to steer us in a certain direction. This is obvious in extreme, primal situations, where fear causes you to run away, anger causes you to attack or hunger causes you to eat. But these emotions exist outside of those extreme situations as well, gently steering you in certain directions. Often you are not even aware of it.

There's one problem though, which I've talked about before: while the endocrine system is surprisingly intelligent, it understands the world less well than the conscious mind does. The purpose of the conscious mind is to understand the world better. That's why we have the ability to overcome our emotions, teaching them what is appropriate to be afraid of, what is acceptable to get mad at and what is safe to eat. In a sense, the conscious mind serves as the loss function for the endocrine system as much as the other way round.

This concept is not as strange as it might sound. Something like it exists in the world of machine learning as well, in generative adversarial networks. I didn't mention the term earlier because I didn't want to set the wrong premise. Importantly, the conscious mind and the endocrine system are not adversaries. They're like conjoined twins where one controls the left side of the body and the other the right side. They need to collaborate on any task involving both sides. They spend every second of their lives together, both equally responsible for their relationship, the most important relationship they'll ever have.
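For anyone who hasn't met generative adversarial networks: the setup is two models trained in one loop, where each model's output becomes the other's training signal. Below is a heavily stripped-down one-dimensional sketch with toy numbers and hand-derived gradients; it's not from any real codebase. In a GAN the two models are adversaries, whereas in my analogy they'd be collaborators, but the mutual-loss structure is the point.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Discriminator d(x) = sigmoid(w*x + b): probability that x is "real".
w, b = 0.1, 0.0
# Generator: outputs mu + noise; its only parameter is mu.
mu, REAL_MEAN, lr = 0.0, 4.0, 0.05

for _ in range(2000):
    real = random.gauss(REAL_MEAN, 0.5)
    fake = mu + random.gauss(0.0, 0.5)

    # Discriminator step: push d(real) towards 1 and d(fake) towards 0.
    dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - dr) * real - df * fake)
    b += lr * ((1 - dr) - df)

    # Generator step: push d(fake) towards 1. The discriminator's verdict
    # serves as the generator's loss, and the generator's output shapes
    # the discriminator's loss in turn.
    df = sigmoid(w * fake + b)
    mu += lr * (1 - df) * w

print(round(mu, 2))  # should end up near REAL_MEAN
```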

I'm simplifying a lot here. I've made it sound like those are the only two parts, which is probably not true. I'm also using the terms "conscious mind" and "endocrine system" quite haphazardly to describe these two parts without properly defining them. I've made a lot of dubious claims without proper justification. I would love to get into this theory in a more rigorous way, but I have to remind you that this is a blog post under a web comic, not a scientific document. I could probably write a whole book about this but, knowing myself, I likely never will.

I might get further into this modeling perspective in future blog posts, though. I think it has a lot of potential and is fun to think about. It can be applied to many questions that philosophers struggle with and I feel like it brings a fresh perspective that makes some things a lot easier.

But what do I know? I'm not so pretentious as to call myself a philosopher. I'm just a guy who likes to think about stuff. I write these posts for fun. If you find them fun to read, enjoy coming along for the ride.

Comic transcript

Panel 1:
H and their therapist are still walking through the strange land in H's mind.
T: You don’t even know where we’re going, do you?
H: Long term: no. Short term: That lemonade stand there. I’m thirsty.
Panel 2:
They walk up to the lemonade stand.
H: Can we have some lemonade please?
L: Nope. Sorry.
H: What? Why?
Panel 3:
L: I will only give you lemonade if you go to the office and fix a bug in that file storage software you’re working on.
T: Why do you care about that?
Panel 4:
L: The file storage software is part of a database application that carpenters use to store patterns they can cut into wood using milling machines. They need to cut these patterns to produce large quantities of the characteristic coffee tables used by the coffee shop chain which is the favorite place for my brother-in-law to get coffee. They need the tables to open a new shop in my home town and, once they do, my brother-in-law will finally be willing to move there and help me care for my lemon trees, which will make them produce better lemons so I can make better lemonade.
Panel 5:
T: Can’t we just give you money?
L: What is ... moo ... nay?
T: Oh. I think I’m beginning to understand.