I couldn’t have stated this any more elegantly myself. I was about to write a similar blog post today on why there’s no reason to assume AI systems couldn’t be programmed to be conscious, but decided first to google whether such an article had already been posted. Basically, consciousness is just the sum of all our thoughts, sensations, emotions, memories, motor controls, etc., each of which is, in principle, simple to emulate with algorithms. There is no need to duplicate analog or biological elements or processes in silicon, although that would also be a possible route, albeit not a very parsimonious one (Occam’s Razor).
I recently had a brief back-and-forth with Bobby Azarian about his new article on Raw Story. Azarian, a neuroscientist at George Mason University, argued that artificial intelligence (AI) could never be conscious. I highly recommend reading Azarian’s article: it’s a great distillation of some key concepts in the philosophy of mind, and he makes an argument that is well worth considering. For the most part, I agree with Azarian’s reasoning regarding current AI, but I don’t think his argument precludes the possibility of future AI being conscious.
First, a summary of Azarian’s key points:
- Computers are Turing Machines, which means they can perform operations on symbols but can’t recognize what those symbols mean (which requires a mind).
- Consciousness is a biological phenomenon, which is produced by processes very different from what happens inside a computer. While brains are in some sense “digital,” since information is carried by a neuron…
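To make the first bullet concrete, here is a minimal sketch of a Turing machine in Python (all names and the transition table are my own illustration, not from Azarian's article). It inverts a string of bits purely by rote table lookup: the machine rewrites symbols according to fixed rules and never has any notion of what the symbols mean, which is exactly the point of the "symbols without a mind" argument.

```python
def run_turing_machine(tape, transitions, start_state, halt_state):
    """Run a one-tape Turing machine and return the final tape contents.

    tape: list of symbols; '_' is the blank symbol.
    transitions: dict mapping (state, symbol) -> (write, move, next_state),
    where move is 'R' (right) or 'L' (left).
    """
    state, pos = start_state, 0
    while state != halt_state:
        symbol = tape[pos] if pos < len(tape) else '_'
        write, move, state = transitions[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == 'R' else -1
    return ''.join(tape).strip('_')

# A trivial machine: scan right, flipping 0<->1, halt on blank.
# Note the table is pure syntax -- nothing here "knows" these are bits.
invert_bits = {
    ('q0', '0'): ('1', 'R', 'q0'),
    ('q0', '1'): ('0', 'R', 'q0'),
    ('q0', '_'): ('_', 'R', 'halt'),
}

print(run_turing_machine(list('1011'), invert_bits, 'q0', 'halt'))  # -> 0100
```

The machine's entire "behavior" lives in the lookup table; whether the 0s and 1s encode pixels, prose, or nothing at all is invisible to it, which is the intuition Azarian's first point trades on.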
View original post 1,011 more words