In 1959, the groundbreaking computer scientist Arthur Samuel began teaching a computer to play checkers. He defined machine learning as “a field of study that gives computers the ability to learn without being explicitly programmed.” We can benefit from this definition, but first we must define the verb “to learn.” In Arthur Samuel’s world, “to learn” was not cognitive; it was operational. Paraphrasing Alan Turing’s famous paper, “Computing Machinery and Intelligence,” let’s not ask, “Can machines think?” Let’s ask, “Can machines perform the way we (who can think) do?”
Today, we get to ask a different question: Is it possible for a “machine that performs the way we (who can think) do” to achieve sentience?
I love this question! Love, love, love!!! How arrogant of the engineers to assume a sentient machine would be anthropomorphic. Why would it have any recognizable human attributes of sentience? There are so many questions to ask.
If Google created a conscious machine, that’s awesome. If they created a model that can fool us into thinking it’s sentient, that’s awesome too.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. I am not a financial advisor. Nothing contained herein should be considered financial advice. If you are considering any type of investment you should conduct your own research and, if necessary, seek the advice of a licensed financial advisor.