Note: I wrote this on 17/09/2023 while travelling to uni on a train. A family sitting beside me sparked the idea.
Will AI ever shade their child while they sleep?
A lot has been said about what AI will be capable of in the near future, but can it do what we humans experience every day?
There are a lot of things we humans overlook about what makes us special. We don’t notice it often, but we care about others. We are just wired that way. We are social animals - we get our energy and motivation from others, and we can do far more than we imagine when we have support around us. AI are more-or-less just smart deduction agents. We are brave enough to follow our hearts, and even braver to do what our hearts tell us not to. We are powerful enough to fight to protect our families, and weak enough to break down from a few words. Mothers, doctors, and many other professions are built around caring for our fellow humans. We might not always go out of our way to help others, but sometimes our actions give strength to someone in need. Imagine the thousands of years of wisdom that this care has built up over our evolution. From the early days of the pyramids to modern development, the collective effort of humanity has vastly outperformed anything statisticians have predicted throughout history.
Being someone who is really into technology, I sometimes truly ponder whether AI can actually replace humans. Can they care for a child with the love that mothers have for their children? Can they tell from a baby’s cry whether it is hungry, wants something to play with, or needs to poo? Can they shade their children from the sun so that they can sleep in peace? Can they truly understand what it means to have a child - what sacrifice means, what hardships mothers go through - or are they really just guided by instructions? You can’t train goal-centric models to deviate from their task and work on something seemingly unrelated and unimportant. However good the agents are, and however good their model of the real world is, the emotional work we do for each other is infinite. Infinite unconditional situations that can’t be reduced to efficiency? Leave that.
Arguments like these are somewhat binary in my opinion. They deal in absolutes rather than in degrees, but these questions need to be asked from a philosophical point of view to test whether machines are actually intelligent or not. They might be really good at manipulating logic, but that is certainly not all intelligence is. These should be the benchmarks of ‘intelligent’ systems. What qualities should these systems have? Can we find a way to ‘measure’ the qualitative features - because that is what would be required to build them? Give me one equation that fully describes how emotions work (and when to use which), and we are one step closer to AGI; otherwise, make AI do your menial tasks, because it will boost your productivity.
Emotional intelligence is not something that can be learned, nor something that arises from a rule-based approach. If these qualities are what count, then humans are far too intelligent, despite all the good and bad we have done in our lives. Machines don’t know what confidence is. They don’t know what jealousy, love, shame, or pride are. They don’t know how to deal with breakups, how to obsess, how to do something despite the odds… I could go on and on; the list is inexhaustible. These qualities cannot be replicated by AI, however good it becomes in the future - unless, of course, there is some massive change in my understanding of the world.