Naive Psychology

How a human who makes AI thinks humans think, feel, and interact

Introduction

This is a space to share my naive and spontaneous thoughts on how humans think, feel, and interact with each other. As a machine learning practitioner interested in the impact of AI on society, I am constantly learning and trying to unify perspectives from machine learning, reinforcement learning, behavior science, affective modeling, psychology, sociology, organizational behavior, ethics/moral philosophy, economics, geopolitics, policy analysis, marketing, mass manipulation, mindfulness, and psychonautics.

Here are my long-term goals:

  1. Understand and model the human decision process.
  2. Understand and model how societies emerge from individual behavior (human and AI).
  3. Explore all the ways AI will impact points 1 and 2.
  4. Design AI systems aligned with our core values.

From individual behavior to society

A simplistic model of humans is that we make decisions based on just two things: our emotions and our values; everything else (e.g., habits) is a side effect of our inner decision-making agent.
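To make this two-factor model concrete, here is a toy sketch of an agent that scores candidate actions by how well they align with its emotional state and its value system. Every name, weight, and scale here is an illustrative assumption of mine, not an established model.

```python
# Toy two-factor decision model: an action's score blends alignment
# with the agent's (slow-changing) values and its (fast-changing)
# emotions. All weights and scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Agent:
    values: dict          # value name -> importance in [0, 1]
    emotions: dict        # emotion name -> intensity in [0, 1]
    value_weight: float = 0.7   # how much values dominate emotions

    def score(self, action):
        # action carries 'value_alignment' and 'emotional_appeal' dicts
        # mapping the agent's keys to alignment scores in [-1, 1].
        v = sum(self.values[k] * action["value_alignment"].get(k, 0.0)
                for k in self.values)
        e = sum(self.emotions[k] * action["emotional_appeal"].get(k, 0.0)
                for k in self.emotions)
        return self.value_weight * v + (1 - self.value_weight) * e

    def decide(self, actions):
        # The argmax is only the default: in this framing, "free will"
        # is the freedom to pick an action that scores lower.
        return max(actions, key=self.score)
```

For example, an agent that weighs honesty highly would pick a truthful action over a fear-driven lie, even when the lie has more emotional appeal.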

Free will exists in the sense that we are free to take suboptimal actions with respect to our value system. However, it's much harder to change (or even be aware of) our value system.
"Experience teaches us no less clearly than reason, that men believe themselves free, simply because they are conscious of their actions, and unconscious of the causes whereby those actions are determined." (Baruch Spinoza, Ethics)

It's possible to change our values, though. In fact, our values change naturally over time: as we age (priorities shift), as we interact with various people (peer influence and social pressure), as we create emotional and legal bonds (e.g., marriage and children), as our environment and surroundings change (immigration, wars, natural disasters, social evolution), and as we go to school and work (where specific values are rewarded and/or internalized). Change can also be forced or accelerated through marketing (commercial purposes) and propaganda/disinformation (political purposes), which leverage an arsenal of techniques to influence our beliefs and values within a sociotechnical system, for better or worse.

Notice the mind-boggling paradox: how can we trust our beliefs to judge an ideology/lifestyle/worldview/product, if our beliefs themselves can be influenced by the very entities (corporate and political forces) that want to be judged favorably?

There is no easy answer to this question, because values will always be arbitrary (they cannot be mathematically defined). However, we can try to clarify our beliefs by reducing them to a small set of primary (core) values, from which different secondary values and ideologies can be derived, through logical steps and policy analysis, depending on the relative importance of the core values. Once we decide (purely subjectively/axiomatically) which core values to care about, the judgement process reduces to asking whether the ideology/lifestyle/worldview/product is consistent with them.
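The reduction described above can be sketched as a tiny consistency check: weight a candidate's impact on each core value and ask whether the total crosses a threshold. The particular core values, scores, and threshold below are illustrative assumptions, chosen axiomatically exactly as the text suggests.

```python
# Judge a candidate (ideology, product, lifestyle, ...) purely by its
# consistency with an axiomatically chosen set of core values.
# Core values, weights, and threshold are illustrative assumptions.
core_values = {"human_life": 1.0, "truth": 0.9, "basic_freedoms": 0.8}

def judge(candidate, core=core_values, threshold=0.0):
    # candidate maps each core value to an impact score in [-1, 1]:
    # -1 = violates the value, +1 = fully consistent with it.
    weighted = sum(w * candidate.get(v, 0.0) for v, w in core.items())
    # Normalize by total weight so the threshold is scale-independent.
    return weighted / sum(core.values()) > threshold

judge({"human_life": 0.5, "truth": 0.8, "basic_freedoms": 0.2})  # consistent
judge({"human_life": -1.0, "truth": -0.5})                       # not consistent
```

Changing the relative weights in `core_values` is precisely the subjective step: two people with the same procedure but different weights can reach opposite judgments.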

For good reasons, this approach is one of the founding principles of modern democracies. Core values are encoded in the Constitution, from which secondary values (laws) are derived. Lower courts judge people's actions based on laws (secondary values), and when there is disagreement, the Supreme Court determines whether laws are constitutional (consistent with core values). Just like core values, the Constitution is slow and hard to change. In both cases, however, there are (and need to be) ways to reform them whenever they become out of touch with our current sociotechnical realities, for instance when a disruptive technology significantly reshapes society by changing the balance of power, the way people connect, the nature of social interactions, social expectations, or social selection mechanisms.

Every time a disruptive technology is widely adopted, we should be mindful that corporations and political parties will be quick to adapt their influencing techniques to the new sociotechnical infrastructure, simply because they have every incentive to do so for their own growth and survival. Social media marketing and disinformation work the same way. It is our duty to identify those techniques and regulate the infrastructure to sustain and grow our core values.

AI, Humans, and Society

Several thinkers have suggested that AI will progressively become better than humans (at least on average) at making decisions that maximize our own subjective sense of happiness. Right now, AI does a decent job with day-to-day decisions: which restaurant to pick, what show to watch, which route to take. In the future, AI might excel at life-changing ones too: accepting a job offer, enrolling in grad school, finding a compatible life partner, moving to a new city, or buying a property.

However, our values and emotions are influenceable, especially in a sociotechnical age where human interactions are increasingly contingent on the dynamics (recommendations, predictions) of the underlying technological infrastructure: social media, dating apps, and online shopping are just a few examples of how human behavior is constantly influenced by technology.

Recently, major tech companies have come under fire (beyond privacy issues) for business models built on grabbing attention and maximizing engagement at all costs, resulting in screen addiction, division, polarization, and a loss of trust in public institutions (effects exploited by adversarial foreign agents), eventually weakening our democracies.

But, it doesn’t have to be that way.

Now more than ever, we need to work on ethical AI systems, not only to ensure that individuals can be happy, but also to ensure that core values (e.g. human life, truth, basic freedoms, sustainability) are respected, and that society as a whole remains cohesive.

The first step in building helpful and ethical AI is understanding not only humans' emotional states, but also their values and beliefs, and which ones we want to promote. The hard part is that ethics and morality cannot be mathematically defined, as they are arbitrary in nature.


Work in Progress

Everything below is a work in progress. Many ideas might be naive or wrong. I'll do my best to refine them as I learn.

I will explore the human at three different scales. Starting from the individual human mind, I will expand to small-group tribal dynamics, then to large-group social dynamics such as those imposed by culture, rules, social constructs, and dominant ideologies.

I will share my insights and takeaways as I learn more about behavior science, personality traits and profiles, psychology and sociology, values and ethics, public policy, economics, geopolitics, and mass manipulation. In parallel, I will attempt to develop a simplified model of human behavior and social dynamics and ground it in concepts and paradigms from reinforcement learning research. In doing so, I hope to discover new research directions for AI decision-making and improve my understanding of humans and the societies they have built.
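One possible starting point for that grounding is to map the essay's vocabulary onto standard RL terms: treat momentary emotions as reward signals and the slow-changing value system as a learned value function. The tabular TD(0) update below is textbook reinforcement learning; mapping it onto human behavior, and the particular states and rewards, are my speculative assumptions.

```python
# Speculative RL analogy: emotions as instantaneous rewards, values as
# slowly learned state-value estimates. States, rewards, and the random
# "policy" are placeholders; the TD(0) update itself is standard.
import random

states = ["work", "rest"]
V = {s: 0.0 for s in states}                  # "values": learned slowly
emotion_reward = {"work": -0.1, "rest": 0.3}  # "emotions": instant signals
alpha, gamma = 0.1, 0.9                       # learning rate, discount

random.seed(0)
s = "work"
for _ in range(1000):
    s_next = random.choice(states)    # placeholder transition dynamics
    r = emotion_reward[s]             # immediate emotional payoff
    V[s] += alpha * (r + gamma * V[s_next] - V[s])   # TD(0) update
    s = s_next
```

Under this analogy, the essay's earlier point about influence has a crisp translation: marketing and propaganda act on `emotion_reward`, and the value function `V` slowly drifts to match whatever rewards the environment serves.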

Modeling the Individual

Fundamentals

Fear

Addictions

Infinites, Singularities, and Scalability

Tribal dynamics

List of Social Emotions (non-exhaustive). Due to my mindful friends' influence, I tend to treat and observe my emotions in the same way as raw sensory inputs (like vision or touch). However, those emotions are often generated by the ego through the super-ego feedback loop.

Societal dynamics

Values, Worldviews, and Ideologies. Social Games. List of Social Games and Selection Systems.

Project Goals and Directions

Long-Term Goals

Reading List and Work in Progress

Link to Reading List.