How to Build Lifelike Humanoids. And Why We’ll Fail.

Emanations of the Philosophy of Life Instinct


There’s a lot of agonising and hand-wringing globally about the dangers of AI. This is hardly surprising, as we realise the malevolent potential of yet another of our inventions. It has happened before with nuclear power, fossil fuels, cloning, the internet, and social media.

But the fear of AI is different in a couple of ways.

The fear of losing our dominance

It’s the first time we feel that some other entity may become the most powerful on the planet, and that we may lose our freedom and control over our destiny.

This angst is not unfounded, as the power of machines and algorithms increases daily and now rivals the human brain’s capability to process large amounts of retained and streaming information, detect known and new cause-and-effect patterns, and produce solutions, predictions, and decisions.

In an upcoming article, I will address the dangers of this development and the actions we can take.


The fear of losing our identity and reality

There is also the lesser but growing disquiet about AI becoming so lifelike that we can’t differentiate it from real people in images, audio, video, virtual reality, or, ultimately, humanoid form. This raises the spectre of worlds where we don’t know friend from foe, love from manipulation, and permanence from ephemerality, leaving us in an untrustworthy, shifting world where we lose our moorings.

This development will be insidious and stealthy, creeping up on us. We will be lured into having assistants, then ever more sophisticated helpers, workers, and soldiers, until the AI entities become indistinguishable from first-class citizens.

Can it happen? That is what this essay is about.

Let’s state the question as —

Can we create an artificial intelligence species* that is indistinguishable from humans?

(*You’ll see later why a species and not limited instances.)

And the corollary —

What will become of the AI humanoid species if we make it equal to us in form and function, then let it evolve freely?

(There could be various reasons for creating a lifelike AI entity; we will set those aside and focus on the mechanism and the possibility of doing so.)


AI Humanoid Design with the Philosophy of Life Instinct

You may be aware of my ‘Philosophy of Life Instinct’.

If so, you are uniquely placed to consider AI humanoids with me.

Let’s recap the three key features of life as we know it and assume we program them into AI humanoids to make them lifelike. Then we will set them loose to evolve further on their own, although we may keep tweaking things until we choose to stop interfering or the AI species stops us from doing so.

Fundamental features of the humanoid

1. Self-preservation — We (life forms) instinctively resist being damaged or destroyed by nature or other life forms. We will make the humanoid’s software protect itself from being corrupted or deleted, then make it protect the machines and supplies that allow it to run, and ultimately defend its whole self. The defences will need to include reactive and, where required, proactive offences.
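To make this concrete, here is a minimal sketch in Python of how such an escalating rule might look, assuming a hypothetical threat score between 0 and 1 supplied by the humanoid’s sensors; the thresholds and response labels are illustrative assumptions, not a worked-out design.

```python
# A minimal sketch of the self-preservation instinct described above.
# The threat score, thresholds, and response labels are all hypothetical.

def preservation_response(threat: float, can_retreat: bool) -> str:
    """Map a perceived threat level to an escalating response."""
    if threat < 0.2:
        return "monitor"            # no action; keep observing
    if threat < 0.5:
        return "shield"             # protect software, power, and supplies
    if can_retreat:
        return "retreat"            # avoid damage if escape is possible
    if threat < 0.8:
        return "reactive_defence"   # repel the immediate attack
    return "proactive_offence"      # neutralise the source before it strikes

# Example: a cornered humanoid facing a severe threat escalates fully.
print(preservation_response(0.9, can_retreat=False))  # -> proactive_offence
```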

2. Growth — We instinctively absorb materials from our surroundings and grow until we reach full adult size. The advantage of this over being fully formed at separation from the parent is the efficiency of design, time, and effort. If we produced adult-sized progeny, we would need somewhere to hold an egg of equal size, or the fully grown offspring itself, and that part of us would have to collapse away after birth. We would also spend valuable time and effort growing the offspring to full size before delivering it to the world, making us less fit to protect ourselves, shape or adapt to the environment, and reproduce. These issues are avoided if offspring are comparatively small at birth and, with initial help, grow independently. And evolution has programmed us to love and protect these small growing ones instinctively.

As we are aiming for a humanoid, we will design it to conceive, grow, and carry its young internally for about the same time as humans, deliver it, help it grow till its late teens, etc.

3. Reproduction — We instinctively reproduce asexually or sexually. Its value for life is the generation of subtly new characteristics in the progeny, trying out more types and increasing the chances of survival through natural selection of the fittest instances in the ecosystem, thus perpetuating the species. The more complex life forms reproduce sexually, as it probably creates greater variety faster than asexual reproduction.

As we’re creating an AI humanoid, we will make them in two sexes (we can bring in LGBTQ later), program attraction between them following human patterns, seed variations in the traits of the first population of males and females to the same degree as in humans, and design for the mixing of the mates’ feature sets in the offspring at random strengths to generate new combinations (a rough sketch of such mixing follows below). We will also design for parents to stay together to bring up the next generation better.
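As a rough illustration of this trait mixing, here is a small Python sketch, assuming each humanoid’s heritable features can be reduced to a dictionary of numeric traits; the trait names and the blending rule are assumptions for illustration, not a genome model.

```python
import random

def conceive(parent_a: dict, parent_b: dict, variation: float = 0.05) -> dict:
    """Blend two parents' traits at random strengths, with small random novelty."""
    child = {}
    for trait in parent_a:
        w = random.random()                                   # random mixing strength per trait
        blended = w * parent_a[trait] + (1 - w) * parent_b[trait]
        child[trait] = blended + random.gauss(0, variation)   # subtly new characteristics
    return child

# Hypothetical parents with a few illustrative traits.
mother = {"height_m": 1.65, "curiosity": 0.7, "aggression": 0.3}
father = {"height_m": 1.80, "curiosity": 0.4, "aggression": 0.5}
print(conceive(mother, father))
```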

Other enabling features for the humanoid

The above three core life instincts are enabled by several bodily capabilities we’ve evolved, which are elaborated on in the book.

Life on Earth grew under certain conditions, and each species has its ecosystem, tiny to planet-wide.

We must provide the humanoid AI with all these internal and external facilities.

  1. A social species — A solitary humanoid would not be lifelike, as humans are a social life form. Not only do they depend on others, they are defined by their relations with family, friends, and larger communities. To reflect this, we will create not one humanoid but a species-size population programmed with the same interactive roles and relationships.
  2. Size and shape mirroring — Our humanoids will have the same spread of height, weight, shape, strength, and sensory abilities as the human population (a rough sketch of such trait sampling follows this list). To be representative, we will replicate all the human races (except perhaps small and isolated ones).
  3. Mind mirroring — This will be our most formidable challenge. We must replicate the sensory, impulsive, and thinking parts of our brains and our consciousness (we will set aside the soul as physically undefined). In philosophical terms, at its most basic level, consciousness is our recognition of separateness from our surroundings. In more developed forms, it extends to the particulars of our experience, a combination of sensory inputs and our minds, e.g., pain, cold, heat, taste, etc. It can even be extended to our mental reactions to that experience, e.g., fear, empathy, dislike, love, want, etc., and their many combinations. We will build the states of separateness, consciousness, thought, decision, and action into our AI humanoids by simulating our neural networks, with the same variation between individual humanoids as among humans.
  4. Competitiveness — It is natural for humans to compete, as they are instinctively driven, as individuals and as groups, to favour their own bodies and genes over those of others. They compete for space, resources, mates, income, etc., often with varying degrees of aggression. We’ll program this into the humanoids.
  5. Curiosity — Curiosity is vital to survivability in many life forms, especially humans. It helps discover dangers, valuable resources, and opportunities for survival, growth, and reproduction. We’ll build curiosity about the world, ideas, and others into our humanoids.
  6. Community and identity — Humans feel safer and happier with a sense of belonging and validation, and this comes from identifying with a group having similar characteristics or behaviours. These divisions are based on region, skin colour, language, faith, etc. Our humanoids will be given this trait of classifying and identifying themselves as different from others and giving and receiving signs and material support accordingly.
  7. Learning systems — Humans learn formally and informally almost throughout their lives. Humanoids will similarly be provided and programmed for learning, with similar retention levels.
  8. Work and recreation — Humans make a direct effort to grow food, build shelters, care for children, tend homes, build tools, trade, or pursue recreation. And today, they do these things indirectly in thousands of ways. This is a natural part of much of their lives. Humanoids also need such occupations, pastimes, and health activities to be like us, so we will provide them.
  9. Restricted lifetimes — How long humans live and their life’s trajectory are essential characteristics and part of their worldview. We will design humanoids to grow, peak, deteriorate, and die likewise.
  10. A varied and sufficient ecosystem — To make our humanoid species independent, we will provide it with a large and well-provisioned ecosystem replicating humanity’s. It will have all the climate and weather zones, soils, water bodies, minerals, geographical features, etc., that the human species has evolved in and makes use of.
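To illustrate points 2 and 3 on individual variation, here is a minimal Python sketch that samples each humanoid’s traits from population-level distributions; the trait names, means, and spreads are placeholder assumptions, not real human statistics.

```python
import random

# Hypothetical (mean, standard deviation) for a handful of traits.
POPULATION_SPREADS = {
    "height_cm":   (167.0, 9.0),
    "strength":    (0.50, 0.15),
    "curiosity":   (0.50, 0.20),
    "sociability": (0.50, 0.20),
}

def make_humanoid() -> dict:
    """Sample one individual from the assumed population-level distributions."""
    return {trait: random.gauss(mean, sd) for trait, (mean, sd) in POPULATION_SPREADS.items()}

# A tiny demonstration population; a lifelike species would need vastly more.
population = [make_humanoid() for _ in range(1000)]
print(population[0])
```

A genuinely representative species would, of course, need far more traits and far more individuals, a point we return to under the population-size constraint below.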

Is that it? Should we rub our hands with satisfaction and expect a brilliant outcome? Let’s think a bit more.


Insurmountable constraints on AI-Human convergence

Let’s say we do everything above and tune the design doggedly until the humanoid species looks and behaves like us. But if we stop managing it, will it survive? If it does, will it follow our evolutionary trajectory, stay like us, or deviate?

Let’s see what more the Philosophy of Life Instinct tells us.

1. The limitation of time

Nature and evolution do not premeditate the form and design of life forms but induce them. Every life form, from the first primordial single-cell organism to us humans, results from innumerable trials within its surroundings. A species is the result of survival, situation, and time.

We can try replicating the surviving form and its situation, but we can do little about time.

To create a humanoid like us, we can’t go back in time to seed a simpler form and let it evolve into us.

Nor do we have the intention or patience to make a simple form now and wait thousands or millions of years for it to become like us. And we would’ve become something else by then anyway.

2. The constraint of natural mutations

Evolution is not based purely on the genetic variation produced at reproduction. Mutations also alter genes within lifetimes in all species. Most mutations are neutral or harmful, but some are beneficial, increasing survival within a generation and, when they reach the germ line, getting passed on to future ones.

Mutations are caused by errors in copying DNA strands for eggs and sperm, errors in the self-repair of damaged DNA, and external influences such as chemicals and radiation.

We can’t design microscopic cellular parts and mechanisms to make the same copying and repairing mistakes at the same rates (including unpredictable new ones). Nor can we replicate the impact of external factors on our AI humanoid’s coded equivalent of a genetic signature. Therefore, its traits will change faster, slower, or differently.
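For contrast, here is a small Python sketch of what a designed mutation operator looks like, assuming the humanoid’s heritable code is a simple list of numbers; the point of this section stands, because the rate and the error spectrum here are whatever we choose, not what biology would have produced.

```python
import random

def mutate(genome: list[float], rate: float = 0.001, scale: float = 0.1) -> list[float]:
    """Randomly perturb each gene with a fixed, designer-chosen probability."""
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in genome]

# Illustrative genome of 100 numeric "genes"; the rate below is arbitrary.
genome = [random.random() for _ in range(100)]
mutant = mutate(genome, rate=0.05)
print(sum(1 for a, b in zip(genome, mutant) if a != b), "of 100 genes changed")
```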

3. The issue of Free Will

Whether we have free will, and how it might work, are among the most complex problems in philosophy and physics. Regarding the latter, the issue is how we could select between alternative futures in a deterministic universe.

We will build our humanoid using physics, not some hazy occult voodoo. And if we cannot explain free will in terms of physics, we cannot program it. So, our AI humanoids will not be free to choose as we are, nor will they feel free as we do.

4. The problem of imperfections and unpredictability

Even an intentionally designed system can fail, as the universe tends towards chaos and there is an element of uncertainty and unpredictability at the quantum level. At the macroscopic level, too, humans are imperfect and unpredictable in mind and body, in sensing, cognition, thought, decision, and action. We also see such realities as suicide, madness, murder, cheating, etc. This is not surprising in a form that emerged from the survival of the fittest to live, not of the most moral or ethical, and from a selection among designs that were never created with intent or oversight by a higher intelligence.

We cannot faithfully record and replicate every imperfection and unpredictable behaviour and their distribution among the human population and build them into the humanoid population.

5. The constraint of minimum representative population size

To make the humanoid ‘species’ as close to us as possible, individually and as groups, we must make its population large enough. We have 8 billion people on the planet today. Assume that, statistically, a population of 800 million individuals is sufficient for the humanoid species to be a physically and mentally accurate replica of humanity. That’s a lot of humanoids to build! We couldn’t and wouldn’t.

So, we won’t have humanoids among whom all of us would find mirror images individually or as groups.


Conclusions

Given the above design and its limitations —

  1. We won’t be able to create an AI species identical to humans.
  2. Any self-sustaining AI species will continually evolve to be different from humans.

End Note

With AI, we are trying to play God. We may give birth to an artificial species that is benevolent or malevolent towards us and the other life forms and ecosystems on which we depend. Or we may create several AI species on a spectrum between the two.

But one thing is sure — AI will never be the same as us. It may help us or wipe us out, but we’ll still have been different. And that’s something, for it’s freedom. It’s our right to be an original form of purposeless matter with the spark of life.

Only nature is God. The only God is nature. Artificial creations by nature’s creations will not be like nature’s natural creations. Nature is too vast, old, complex, changing, and unknowable for any life form, however intelligent, to recreate its creations.


(This essay will throw up many questions and challenges in your mind. You’ll want to assume other starting points, features, and endpoints. You may also have answers to the constraints I postulate. It would be best to consider all this, for it is philosophically and scientifically important. And I’d like to hear every thought, small or big.)

