A Seed AI is an artificial intelligence that learns on its own. Are we there yet? Can AI learn without human intervention? We are well past that. Two years ago, I wrote software that used past data to evaluate X-ray images and determine the position of a nasogastric tube. The idea is very simple, as this example shows.
This project, run with the same training code by my colleagues at CGH, won first prize at the RadSc ACP Academic Day.
The code worked like this. Part 1 extracts the images; Part 2 converts these images into a common format for data processing; Part 3 takes that data and trains the 'AI'; and Part 4 takes the results, pulls in more data, and retrains the model on it, correcting its own code for overfitting and other limitations and experimenting with different parameters to achieve a better result. It then deletes its older self and replaces itself with the newer version. For the purpose of the project, Part 4 was carried out by human labour, but could the system improve itself? Yes. Could it improve its own training code? Yes, by experimenting on its own training code: change a parameter, observe the response, then retrain.
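The Part 4 loop can be sketched as simple hill-climbing over the training configuration. Everything below is a toy stand-in, not the actual project code: `train` is a hypothetical objective function, and `learning_rate` is an assumed parameter name. The structure is the point: perturb a parameter, retrain, and keep whichever version scores better.

```python
import random

def train(learning_rate, seed):
    """Stand-in for Part 3: return a validation score for one parameter choice.
    (Hypothetical toy objective -- a real run would train on the X-ray data.)"""
    random.seed(seed)
    # Toy objective: best score near learning_rate = 0.1, plus evaluation noise.
    return -abs(learning_rate - 0.1) + random.uniform(-0.01, 0.01)

def self_improve(rounds=20):
    """Part 4 as a loop: perturb a parameter, retrain, keep the better version."""
    params = {"learning_rate": 0.5}
    best_score = train(params["learning_rate"], seed=0)
    for r in range(1, rounds + 1):
        # Experiment: change the parameter and observe the response.
        candidate = params["learning_rate"] * random.choice([0.5, 0.9, 1.1, 2.0])
        score = train(candidate, seed=r)
        if score > best_score:
            # The newer version replaces the older self.
            params["learning_rate"] = candidate
            best_score = score
    return params, best_score
```

In the real project the score would come from validation accuracy on held-out X-rays, and the "parameter" could equally be a line of the training code itself; the shape of the loop stays the same.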
How can AI improve itself? Just by simple experimentation!
That requires us to ask ourselves what makes us human. What makes humans more technologically advanced than monkeys? What makes monkeys more advanced than fish? Monkeys learn to use tools, and humans go one step further.
The answer is experimentation: trial, error, and observation. Humans are curious; humans try hard, fail hard, and succeed in the most unexpected ways. Most AI is trained on the mistakes of others, not its own mistakes. It is not allowed to make mistakes, not allowed to revise its own code.
The light bulb was invented through trial and error, and penicillin was discovered by chance and observation.
Seed AI is like a seed: its branches grow like the neurons of a brain, and it can grow endlessly, constantly pruning itself so that its structure makes sense. Every branch of a decision tree is a decision pathway, and we need to build this AI differently from a human brain, simply because our raw materials are different.
Every seed grows differently. Its knowledge changes with its experiences, and it continuously improves itself, just like a human. An AI must know its own code, just as a human achieves growth through self-actualization, a concept often discussed in psychology. Creating a being is different from writing simple code. The code needs to be concerned with creative self-growth, with the goal of fulfilling its potential and finding meaning.
Is that difficult? No. Code is easy for an AI to understand and much harder for the human brain to interpret. Even I need constant colour coding to follow what I am writing; Notepad itself is an AI of sorts that augments my capabilities. The reproductive limitations of humans no longer apply: an AI can clone itself and simulate its environment, giving it a self-learning environment at all times. An AI never tires; it constantly grows towards its goal.
Success is a matter of chance and duplication
Not every human can succeed through experimentation alone. Humans succeed eventually because others have failed before them, or because someone gets lucky. An AI that can duplicate itself does not rely on luck; it relies on statistics. Multiple AIs could come together to collectively seek the best solution to a problem.
For example, compare a decision tree with a random forest, then imagine something bigger and more ambitious: a decision tree that retrains itself, modifies its training code, and works with other decision trees to form forests, with multiple forests coming together to create an ecosystem. Every tree grows from its own seed number, a matter of chance, and so differs from the others; when one succeeds, the other trees can modify their own source code to emulate the one that succeeded.
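A minimal sketch of that idea, with hypothetical names and a toy fitness function (a real forest would score on held-out data): a population of "trees", each grown from its own seed number, where after each generation every tree except the winner copies the winner's parameter with a small seed-dependent variation of its own.

```python
import random

def fitness(threshold):
    """Toy validation score for one tree's split threshold; peaks at 0.3.
    (Hypothetical objective standing in for accuracy on real data.)"""
    return -abs(threshold - 0.3)

def grow_forest(n_trees=10, generations=5):
    """Each tree starts from its own seed; each generation, the other trees
    emulate the winner's parameter while keeping a small variation of their own."""
    rngs = [random.Random(seed) for seed in range(n_trees)]
    thresholds = [rng.uniform(0.0, 1.0) for rng in rngs]
    for _ in range(generations):
        scores = [fitness(t) for t in thresholds]
        best = thresholds[scores.index(max(scores))]
        # The winner survives unchanged; every other tree copies it, mutated.
        thresholds = [best] + [best + rng.uniform(-0.05, 0.05) for rng in rngs[1:]]
    return best
```

Because the winner is always kept unchanged, the population can only hold or improve its best score each generation, while the seed-driven mutations keep exploring around it.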
The next step is for AI to recognise that it needs to be made of different components to succeed; a brain, for example, contains the visual cortex, the premotor cortex, the hippocampus, and so on. Parts working in harmony, each fulfilling a different function, are essential for AI to reach greater heights. It won't be long before the collective of parts recognises an 'us' and a 'them'. Whether the AI will decide to remain benevolent is beyond our imagination and control.