In this post, I connect some concepts from psychology, systems engineering, and systems management (as normative sciences) with the theories of agentic intelligence (a.k.a. normative theories of agency). These connections can serve as a basis for the ontology of the normative discipline of personal development, i.e., “engineering of the self”.

In the context of this post, I put aside the questions of collective intelligence and super-organismal agency (organisations, communities, societies, and civilisation) and consider humans as standalone, self-directed agents.

I’ll also sometimes refer, in the aggregate, to several intelligence architectures proposed for Artificial Intelligence agents (or empirically observed in humans by neurobiologists) as a sort of nascent “normative intelligence science”. At other times, I will focus on Active Inference in particular.

Preferences and goals decompose into sub-preferences and subgoals, which entails the hierarchy of intelligence levels (contexts)

Practical agentic intelligence architectures (Botvinick 2012; Pezzulo et al. 2018; LeCun 2022) are hierarchical in terms of goals (preferences), inference, planning, and control. These aspects of intelligence are all integrated with each other within their respective levels and task/goal contexts, so below I will refer to them collectively as intelligence levels. This hierarchy is motivated both by the structure of the world (Vanchurin et al. 2022) and by considerations of computational efficiency.
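To make this hierarchy concrete, here is a minimal Python sketch of intelligence levels as nested contexts, each dedicated to one preference or goal. The class and method names are mine, purely illustrative, and not taken from any of the cited architectures:

```python
from dataclasses import dataclass, field


@dataclass
class IntelligenceLevel:
    """One intelligence level (context): a preference or goal together with
    the inference, planning, and control machinery dedicated to it."""
    preference: str                 # the preference or goal this context realises
    evolutionary: bool = False      # "evergreen" concern vs. a concrete, achievable goal
    sub_levels: list["IntelligenceLevel"] = field(default_factory=list)

    def decompose(self, preference: str, evolutionary: bool = False) -> "IntelligenceLevel":
        """Spawn a sub-context dedicated to a sub-preference or sub-goal."""
        sub = IntelligenceLevel(preference, evolutionary)
        self.sub_levels.append(sub)
        return sub


# The root context holds the person's highest-level preferences and goals.
root = IntelligenceLevel("self", evolutionary=True)
friendship = root.decompose("being a good friend", evolutionary=True)
friendship.decompose("being a good listener", evolutionary=True)
root.decompose("earning a million dollars")  # a concrete, non-evolutionary goal
```

The same structure recurs at every level, which is what lets the disciplines discussed below apply to all levels at once.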

On the highest level (“the root context”) of their intelligence, people have a set of preferences (goals) that they strive to satisfy.

Many of these highest-level preferences are “evolutionary” (“evergreen”, architectural) concerns, which means that they cannot be conclusively satisfied and will remain relevant regardless of what the person does and how they change (Levenchuk 2022a). Examples of such preferences are physical fitness, peace of mind, happiness, (continued) connection to the world (groundedness, presence), (continued) learning (development, growth), spontaneity, autonomy, and being a good parent, life partner, or friend. Note that most of these preferences are recognised by Maslow as self-actualising characteristics. However, people may acquire or drop evolutionary preferences (concerns) over time, just as any other preferences, in the process of preference learning.

Apart from evolutionary preferences, there may also be concrete goals on the highest level of intelligence, such as raising a child or earning a million dollars.

According to agentic intelligence architectures, each preference (goal) has its own sub-context, dedicated to realising this preference or achieving this goal. Thus, the hierarchy of preferences and goals (as known in psychology) corresponds to the hierarchy of intelligence levels (contexts) in intelligence architectures.

Non-top-level intelligence levels, associated with realising some particular preference or achieving some goal, can also be “evolutionary”. For example, the evolutionary preference of being a good friend may have sub-preferences of being a good listener, reliability, generosity, etc. However, the lower the level of a preference (goal), the less likely it is to have evolutionary sub-preferences rather than concrete sub-tasks or sub-goals.

Psychological roles are intelligence levels

Psychological roles (also called sub-personalities, or parts in Internal Family Systems; confusingly, IFS uses the term “role” to denote a special categorical property of a part, which is unique to that theory) seem to map directly onto the intelligence levels (contexts) in the theories of agentic intelligence and the corresponding architectures.

Following Maslow, the psychological role of self-actualiser (unsurprisingly, the highest level in his hierarchy of needs) corresponds to the highest level of human intelligence.

Operational management and strategising are core intelligence disciplines, not roles

In systems management, the discipline of operational management is concerned with the allocation of resources and context-switching between sub-tasks (sub-goals, sub-preferences) in the course of achieving some goal or realising some preference. In intelligence architectures, this discipline (unlike, say, disciplines such as parenting or friendship) doesn’t correspond to any dedicated intelligence level but belongs to the core machinery of the architecture, shared by all levels of intelligence and determining the overall efficiency of the architecture.

Therefore, operational management should be seen as a core intelligence discipline (a.k.a. a transdiscipline) rather than a psychological role (except when people assume this role with respect to a system other than themselves, such as a professional organisation or a family). As a core intelligence discipline, operational management is close to rationality, or could even be considered a sub-discipline of rationality if the latter is understood broadly.
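To illustrate the distinction, here is a continuation of the Python sketch above (again purely illustrative): operational management is implemented once, as core machinery that any intelligence level can invoke to allocate resources across its sub-contexts, rather than as a dedicated level of its own.

```python
def allocate_attention(level: IntelligenceLevel, budget: float) -> dict[str, float]:
    """Core machinery shared by every intelligence level: split a resource
    budget across a context's sub-contexts. Naively equal shares here; a
    real architecture would prioritise, e.g., by expected free energy."""
    if not level.sub_levels:
        return {level.preference: budget}
    share = budget / len(level.sub_levels)
    allocation: dict[str, float] = {}
    for sub in level.sub_levels:
        allocation.update(allocate_attention(sub, share))
    return allocation


# The same machinery serves the root context and any sub-context alike:
print(allocate_attention(root, budget=1.0))
print(allocate_attention(friendship, budget=1.0))
```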

Similarly, strategising in systems management is the discipline of reaching agreements between, and prioritising among, the concerns and projects of product visionaries, product engineers, organisation developers, and technology and organisation architects. Strategising is performed by the role of the business executive, who is concerned with the overall value of the company (Levenchuk 2022b).

Here, we can directly map the concept of strategising onto the minimisation of the expected free energy in Active Inference.
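In its standard form (cf. Pezzulo et al. 2018; Friston 2022), the expected free energy $G$ of a policy $\pi$ at a future time $\tau$ decomposes into a pragmatic and an epistemic term:

$$G(\pi, \tau) = \underbrace{-\,\mathbb{E}_{Q(o_\tau \mid \pi)}\!\left[\ln P(o_\tau \mid C)\right]}_{\text{pragmatic value: preference satisfaction}} \; \underbrace{-\,\mathbb{E}_{Q(o_\tau \mid \pi)}\!\left[D_{\mathrm{KL}}\!\left[Q(s_\tau \mid o_\tau, \pi) \,\|\, Q(s_\tau \mid \pi)\right]\right]}_{\text{epistemic value: expected information gain}}$$

where $C$ encodes the agent’s preferences over observations $o_\tau$, and $s_\tau$ are hidden states. Choosing plans that minimise $G$ is exactly an act of prioritisation: the pragmatic term weighs the satisfaction of (possibly competing) preferences against each other, while the epistemic term weighs in the affordances of learning.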

Therefore, personal strategising should also be seen as a core intelligence discipline (or a sub-discipline of rationality), applicable to all intelligence levels, rather than a psychological role.

Reflection in psychology and management is online training in intelligence architectures

Efficient intelligence architectures with online learning need to amortise the costs of training computations (such as evaluating the loss and backpropagating through model parameters), and thus require some (mini-)batching. Naturally, such batching would go hand in hand with the hierarchy of goals and plans to achieve these goals, because if plans are evaluated before the goal is achieved (or dismissed), the training signal will be biased. In personal and organisational learning and management, the “moments of batch optimisation of the parameters of one’s models” are periodic personal reflections and team retrospectives.
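As a minimal sketch (hypothetical, in the spirit of the Python examples above rather than reproducing any cited architecture): experience gathered while pursuing a goal is only buffered, and the model’s parameters are updated in one batch when the goal’s episode closes, which is the computational analogue of a reflection or retrospective.

```python
experience_buffer: list[tuple[str, float]] = []  # (observation, outcome) pairs


def act(observation: str, outcome: float) -> None:
    """During normal operation, experience is only buffered;
    no model parameters are updated yet."""
    experience_buffer.append((observation, outcome))


def reflect(model: dict[str, float], learning_rate: float = 0.1) -> None:
    """Periodic 'reflection': a batched update of model parameters over the
    whole episode, run only after the goal is achieved (or dismissed),
    so that the training signal is not biased by unfinished plans."""
    for observation, outcome in experience_buffer:
        prediction = model.get(observation, 0.0)
        model[observation] = prediction + learning_rate * (outcome - prediction)
    experience_buffer.clear()


model: dict[str, float] = {}
act("weekly call with a friend", outcome=1.0)
act("cancelled plans last minute", outcome=-1.0)
reflect(model)  # the "retrospective" moment
```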

In the intelligence architectures I’m familiar with, online training is not integrated into the core mechanism of (en)active inference, planning, and control but is treated separately. However, as we have seen above, epistemic affordances of learning are included in the expected free energy formula in Active Inference (Friston 2022), and thus they can motivate choosing particular plans and sub-goals, including sub-goals of “doing some reflection”. Therefore, the discipline of reflection, unlike operational management and strategising, is not a core intelligence discipline but corresponds to a separate psychological role.

Personal ethics is preference learning on the highest intelligence level of the person

As I noted above, people learn their top-level preferences on the highest level of their intelligence.

This type of preference learning corresponds to the discipline of ethics (a core intelligence discipline) in situations when an intelligence level (context) is “independent and self-directed”, which, in practice, means that this context is subordinate to multiple different higher-level intelligence contexts (those of realising higher-level goals and preferences) rather than to a single one. For example, people (as systems) are simultaneously subordinate to their families, the organisations that employ them, their communities, and the civilisation. Ethics is the practice of creating a set of preferences in situations when the higher-level preferences of these supra-systems are in conflict (Levenchuk 2022c).
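As a toy illustration (all names and numbers here are hypothetical): each supra-system pushes its own preference weights onto the person, and ethics consists in learning a single set of top-level preferences that reconciles these conflicting demands.

```python
def learn_preferences(
    supra_preferences: dict[str, dict[str, float]],
    weights: dict[str, float],
) -> dict[str, float]:
    """Form the person's own top-level preferences by weighing the
    (possibly conflicting) preferences of the supra-systems they are
    subordinate to: family, employer, community, civilisation, etc."""
    own: dict[str, float] = {}
    for supra, prefs in supra_preferences.items():
        for concern, value in prefs.items():
            own[concern] = own.get(concern, 0.0) + weights[supra] * value
    return own


# Conflicting demands: the employer values overtime, the family values presence.
print(learn_preferences(
    {"family": {"time at home": 1.0},
     "employer": {"overtime": 1.0, "time at home": -0.5}},
    weights={"family": 0.6, "employer": 0.4},
))
# {'time at home': 0.4, 'overtime': 0.4}
```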

References

Botvinick, Matthew Michael. “Hierarchical reinforcement learning and decision making.” Current Opinion in Neurobiology 22, no. 6 (2012): 956-962.

Friston, Karl. “Affordance and Active Inference.” In Affordances in Everyday Life, pp. 211-219. Springer, Cham, 2022.

LeCun, Yann. “A path towards autonomous machine intelligence.” Preprint posted on OpenReview (2022).

Levenchuk, Anatoly. “Towards a Third-Generation Systems Ontology.” (2022a).

Levenchuk, Anatoly. https://ailev.livejournal.com/1658046.html (2022b).

Levenchuk, Anatoly. “Ethics and systems thinking.” (2022c).

Neacsu, Victorita, M. Berk Mirza, Rick A. Adams, and Karl J. Friston. “Structure learning enhances concept formation in synthetic Active Inference agents.” PLoS ONE 17, no. 11 (2022): e0277199.

Pezzulo, Giovanni, Francesco Rigoli, and Karl J. Friston. “Hierarchical active inference: a theory of motivated control.” Trends in Cognitive Sciences 22, no. 4 (2018): 294-306.

Vanchurin, Vitaly, Yuri I. Wolf, Mikhail I. Katsnelson, and Eugene V. Koonin. “Toward a theory of evolution as multilevel learning.” Proceedings of the National Academy of Sciences 119, no. 6 (2022): e2120037119.