Those of us with sight like to rely on what we see as a valid and accurate depiction of the world around us. Maybe you’ve heard yourself say something like, “I saw it with my own two eyes!” when you wanted to convey an emphatic truth. Yet we know that eyewitness accounts at the scene of a crime vary in accuracy around size, shape, race, and even gender. How is it that what we see directly in front of us can be interpreted so differently by each viewer? You may not like the answer.
Nothing we see is an accurate depiction of the reality in front of us, no matter how sure we think we are!
To understand this paradox better, it’s helpful to consider what neuroscience calls the inverse problem: the neurological dilemma that the information coming in through the eyes does not, on its own, specify space, depth, or distance. That information arrives in the form of light, or photons, that reach the retina at the back of the eye. The retinal cells that process it register light intensity, contrast, color, and the angles of lines. What the retina gets zero information about is depth, distance, the light source itself, or anything else we associate with a three-dimensional object.
Yes, we only have the retinal capacity to see in 2D, yet we use vision to navigate a 3D world.
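If it helps to see the dilemma concretely, here is a minimal sketch assuming a simple pinhole-camera model, a stand-in for the eye’s real optics rather than a claim about them: countless different 3D scenes collapse onto exactly the same 2D image, which is why the image alone can never say how far away anything is.

```python
# A minimal sketch of the inverse problem under a pinhole-camera assumption.
# Perspective projection maps a 3D point (X, Y, Z) to a 2D point (f*X/Z, f*Y/Z),
# so depth Z is discarded: scale an object up and push it farther away in
# proportion, and it produces exactly the same 2D image.

def project(point, focal_length=1.0):
    """Project a 3D point onto a 2D image plane (pinhole model)."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

near_small = (0.5, 0.5, 1.0)    # a small object close to the eye
far_large = (5.0, 5.0, 10.0)    # a 10x larger object, 10x farther away

print(project(near_small))   # (0.5, 0.5)
print(project(far_large))    # (0.5, 0.5) -- identical image, very different realities
```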
[Figure: an overhead view of the visual system in the brain.]
Information coming in through the eyes gets transduced from photons into electrical signals by the retina. These signals travel along the optic nerves, where part of the information from each eye crosses over at the Optic Chiasm. The information then continues to each side of the Thalamus, which acts like a relay station.
Here’s the amazing thing. At this point in the process we still have no idea how to interpret the visual input we’ve received. The image of the outside world does not yet register to us as something we can see.
The information must continue on through another loop until it reaches the Occipital Lobe in the back of the brain; in particular it must reach the visual cortex.
You must be thinking that finally, now we can interpret what we’ve seen.
The answer is still no.
At this point we may have neural excitation around color, line angles, and even direction of motion, but we still don’t have the ability to say what we’ve seen or to guess where it is in space. The what and the where of vision happen when this information begins its journey from the visual cortex to two other parts of the brain: the Temporal Lobes and the Parietal Lobes.
Finally, we bring these visual stimuli to a place in the brain that can take 2D information and interpret it back into a 3D guesstimate of what’s in front of us. Literally, we make up an interpretation of distance, using information about object size to make that determination, and we make up a concept of depth from information about contrast.
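To make that “making it up” concrete, here is a minimal sketch under one loud assumption: that the brain treats a familiar object’s remembered size as known. The estimate_distance function below is purely illustrative, not a model of actual neural processing, but it shows how an assumed size turns an apparent size on the 2D image into a guess about distance, and how a different assumption yields a very different guess.

```python
# An illustrative sketch (not a model of the brain) of inferring distance
# from apparent size, given an assumed real-world size for the object.

def estimate_distance(assumed_real_size, image_size, focal_length=1.0):
    """Guess distance by inverting the pinhole projection: d = f * S / s."""
    return focal_length * assumed_real_size / image_size

# The same image patch, interpreted under two different size assumptions,
# yields two very different "distances" -- the guess depends on what we assume.
print(estimate_distance(assumed_real_size=1.7, image_size=0.01))  # ~170 units: a person far away
print(estimate_distance(assumed_real_size=0.3, image_size=0.01))  # ~30 units: a doll much closer
```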
We happen to be very good as a species at interpreting these things within an unspoken range of collectively agreed-upon rules. But we are making it up all the same.
It is only when these agreed-upon rules get challenged that we realize our fallibility and admit to being caught in an illusion…yet the illusion is there all the time.
In illusions like these we make up information about the presence of lines that are not actually there, or a perception of motion that doesn’t actually exist. The truth is we are making up this information in each and every moment, but when it conforms to our collective agreements we don’t question it. Instead, we miraculously use our eyes to navigate a world we have far less information about than we think we do.
What you see is actually being created inside your brain and may not bear any real resemblance to the world in which you live. If you add to this complex system all of the many beliefs we hold about what we expect to see in any given situation, you may begin to realize the scale of our grand illusions. If we’re in the woods, was the shadow we saw up ahead really a bear? Was the perpetrator really someone of that ethnicity or this gender? Did you really see her kiss him in the car?
You get the point.
While our adaptation to the inverse problem has been quite successful, and we really are well suited to live in a 3D world using a 2D system of visual information gathering, one has to wonder what more we could see if we occasionally stepped outside of our expectations and opened up to the idea that there’s more going on in life than we actually know.