What is the mind? Is it simply a collective sum of networked neural impulses? Is it less or more than that? Where does it begin and where does it end? What is its purpose? Is it the soul? These are questions that have haunted human consciousness for much of its existence. But in this increasingly digital age, we gain exciting new insight into the nature of consciousness by artificially simulating it.
Artificial intelligence is somewhat loosely defined, but can generally be understood as a subset of another field called biomimetics. This science (interchangeably referred to as “biomimicry”) imitates natural processes within technological systems, using nature as a model for artificial innovation. In nature, evolution rewards beneficial traits by proliferating them throughout the ecosystem, and technology shows a similar tendency: the technology that yields the most useful results is the technology that thrives.
As machines develop the ability to learn, compute and act with a level of creativity and individual agency that is virtually human, we as people are confronted with increasingly complex and pressing questions surrounding the nature of AI and its role in our future. But before we delve too deeply into the semantics of artificial intelligence, let’s first examine three ways in which it is already beginning to manifest in our world.
Human perception is like a set of input devices on a computer. Visual data hits the human retina and then flows through the optic nerve to the brain. Sound waves hit the outer and then middle ear before the inner ear begins the neuronal encoding process. Touch, smell and taste similarly transform external stimuli to internal neurological activity. And our memory serves as a database within which this sensory information can be cross-referenced, identified and put to use.
The computer reflects human anatomy in its configuration of input, transduction and storage. Cloud technology has evolved into a sort of collective consciousness that stores, vets and distributes shared knowledge and ideas. Image and sound recognition software uses camera and microphone hardware to capture data and cross-reference it with the cloud, in turn outputting to the user an identification of what was seen or heard. Recognition apps like CamFind and Shazam essentially serve as sensory search engines, while the fields of robotics and automated transportation build machines that use recognition technology to navigate and act within the world with unprecedented independence. (For more on AI’s attempts to become more human, see Will Computers Be Able to Imitate the Human Brain?)
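The input, cross-reference, output loop those recognition apps follow can be sketched as a toy "sensory search engine." This is a minimal illustration, not how CamFind or Shazam actually work: the fingerprint here is a plain exact hash and all names are invented, whereas real services use noise-tolerant perceptual fingerprints so a clip recorded in a loud bar still matches.

```python
import hashlib

# Hypothetical "cloud" database mapping content fingerprints to labels,
# standing in for the shared knowledge store described above.
CLOUD_DB = {}

def fingerprint(raw_bytes: bytes) -> str:
    """Reduce raw sensor input (camera or microphone bytes) to a compact key."""
    return hashlib.sha256(raw_bytes).hexdigest()

def register(raw_bytes: bytes, label: str) -> None:
    """Store a known item in the shared database (the 'cloud')."""
    CLOUD_DB[fingerprint(raw_bytes)] = label

def recognize(raw_bytes: bytes) -> str:
    """Input -> cross-reference -> output: the loop described in the text."""
    return CLOUD_DB.get(fingerprint(raw_bytes), "unknown")

# Usage: a clip the service has seen before is identified; a new one is not.
register(b"audio-sample-of-song", "Song Title - Artist")
print(recognize(b"audio-sample-of-song"))  # -> Song Title - Artist
print(recognize(b"never-heard-before"))    # -> unknown
```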
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has served as one of the most effective validation tools in internet security for many years now. It is well known for blocking automated password breaches with a challenge-response interface that could long be solved only by humans. However, a team known as Vicarious has managed to defeat the system using a program that simulates the human thought process. The node-based software assesses the CAPTCHA image in stages and, like a human mind, is able to break the image into components that are compared against language characters in a database. CAPTCHA has long been emblematic of the difference between machine intelligence and human intelligence. But with Vicarious’s new innovation, the line between the two is being blurred.
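As a rough illustration of that staged, component-based approach, here is a toy solver operating on an ASCII "image": stage one splits the image into per-character components at blank columns, and stage two compares each component against a small character database. The glyph templates and the pipeline are inventions for this sketch, not Vicarious's actual algorithm, which handles distorted real-world images.

```python
# Character templates: 3-row ASCII "bitmaps" standing in for glyph shapes.
# These glyphs are invented for illustration.
TEMPLATES = {
    "A": [".#.", "###", "#.#"],
    "T": ["###", ".#.", ".#."],
    "L": ["#..", "#..", "###"],
}

def segment(image_rows):
    """Stage 1: split the image into per-character components at blank columns."""
    width = len(image_rows[0])
    segments, current = [], []
    for col in range(width):
        column = [row[col] for row in image_rows]
        if all(c == "." for c in column):   # blank column = gap between glyphs
            if current:
                segments.append(current)
                current = []
        else:
            current.append(column)
    if current:
        segments.append(current)
    # Each segment is a list of columns; transpose back into rows.
    return [["".join(col[r] for col in seg) for r in range(len(image_rows))]
            for seg in segments]

def classify(glyph_rows):
    """Stage 2: compare one component against the character database."""
    for char, template in TEMPLATES.items():
        if glyph_rows == template:
            return char
    return "?"

def solve(image_rows):
    """Full pipeline: segment the image, then classify each component in order."""
    return "".join(classify(g) for g in segment(image_rows))

# A toy "CAPTCHA image" spelling TAL, with one blank column between glyphs.
image = [
    "###..#..#..",
    ".#..###.#..",
    ".#..#.#.###",
]
print(solve(image))  # -> TAL
```

Real CAPTCHAs defeat exactly this kind of exact template matching by warping, overlapping and cluttering the characters; the significance of the Vicarious work is that its recognition tolerates such distortion much as human vision does.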