Conversation with a futurist: How intelligent is AI?

As a cinephile with a particular affinity for theatrical depictions of possible futures, I often travel with the sounds of the impeccably composed Interstellar soundtrack and M83’s cosmic tunes echoing through my Bluetooth headphones. Earlier this month, I land in New Jersey with excitement about what this trip may bring me. Then, out in the vast greenery of Princeton, I meet a futurist.

When Jonathan Peck tells me that he is the president and senior futurist at the Institute for Alternative Futures (IAF), I almost immediately plant the seed that we need to talk more, so that I may learn from his experience.

When catching up with Jonathan weeks later to continue the discussions we had in Princeton, I ask him how he defines the term ‘futurist.’ He replies, “From my perspective [a futurist] is somebody who thinks about change and what it will mean – how we can actually alter our futures so that they’re better than what they were going to be if we hadn’t thought about what is going to change.” In essence, a futurist is someone who works to understand what may happen next and explores how those outcomes might be changed for the better.

Like many other labels, ‘futurist’ is sometimes self-applied and at other times used to describe how others perceive someone’s work – like that of entrepreneur Elon Musk, who, as far as I can tell, does not use the label for himself. Yet a quick Google search of Elon Musk’s name turns up blogs referring to him as a futurist simply because the nature of his projects – such as electric semi trucks – is future focused.

Jonathan Peck, on the other hand, lovingly identifies himself as a futurist. It is the nature of his work and a calling he felt drawn to upon discovering the Future Studies graduate program at the University of Hawaii. Still, Jonathan’s path to becoming a futurist was not without ambivalence.

“It was like love at first sight. But I felt a little conflicted. I had no idea if I could make a living calling myself a futurist,” Jonathan shares. Luckily, his wife gave him the nudge he needed to pursue his passion professionally. “My wife said you’ve got to find something where you love what you’re doing,” he continues.

After coming to understand what a futurist is and how one may make it a profession, I begin firing off questions to Jonathan about a subject I am especially curious about – artificial intelligence (AI).

Jonathan’s response drops a bomb on me:

“I think AI is much more artificial than it is intelligence. The field itself was developed with the mindset of electrical engineers and they did not study very deeply the nature of intelligence in humans.”

Jonathan’s statement resonates with me. Much of what humans discern to be intelligence is based on the experience and study of the human mind – sometimes in comparison to that of other organic creatures. So what does it mean if AI, as Peck points out, was conceived through the perspective of programmers whose research conditions are set by silicon microchips rather than carbon-based beings?

Is ‘intelligent’ a misnomer for AI products – a convenient name for marketing and business development like the so-called ‘hoverboard’ trend? Or is the nature and very definition of intelligence changing? It wouldn’t be the first time.

A book I happen to own comes up in my discussion with Jonathan: Frames of Mind: The Theory of Multiple Intelligences by Howard Gardner. Gardner is a developmental psychologist and the John H. and Elisabeth A. Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education. He is best known for his theory of multiple intelligences. In his book, Gardner finds IQ tests to be limiting and flawed, as they “are definitely skewed in favor of individuals in societies with schooling and particularly in favor of individuals who are accustomed to taking paper-and-pencil tests, featuring clearly delineated answers.” Gardner’s research suggests the human mind has the potential to develop many intelligences through comprehending various concepts. His book identifies and explores several human intelligences, including musical, visual-spatial, linguistic, logical-mathematical, personal, and bodily-kinesthetic.

Gardner’s work challenges traditional ideas about intelligence – and the very notion of a singular intelligence – by expanding the definition beyond one measure. Though his research is human-focused and therefore done within the conditions of carbon-based beings, it signals that intelligence is an evolving concept with room to grow. In Peck’s opinion, the sort of fundamental research Gardner does – understanding how intelligence is defined, observed, and developed in the human mind – must be central to AI research if researchers intend to build intelligent machines.

And then there is Jeff Hawkins. Hawkins, Jonathan Peck says, is in fact an electrical engineer who does feverishly explore the nature of intelligence in the human mind. Jeff Hawkins is largely known as the founder of Palm and Handspring, producers of some of the United States’ first widely accessible personal digital assistants. After graduating from Cornell University in 1979, Hawkins worked for Intel, only to realize that he wanted to learn more deeply about the human brain.

Jeff Hawkins later found himself shut out of AI research at MIT for his desire to build intelligent machines by first studying human brains. He then went on to study the brain through a biophysics PhD program at the University of California, Berkeley, only to have his research rejected because he wanted to take a theoretical approach. After being told his theoretical research on the brain could not get funding in the program, Hawkins went back to work in the computer industry.

His industry and financial success with Palm and Handspring put Hawkins in a position to found a non-profit research institute, the Redwood Neuroscience Institute, in 2002 – which later became the Redwood Center for Theoretical Neuroscience after it was gifted to the University of California, Berkeley. Hawkins then opened Numenta in 2005, a company built on the premise that “studying how the brain works helps us understand the principles of intelligence and build machines that work on the same principles.”

In a 2017 Forbes article titled 12 AI Quotes Everyone Should Read, Jeff Hawkins offers this rather compelling statement:

“AI scientists tried to program computers to act like humans without first understanding what intelligence is and what it means to understand. They left out the most important part of building intelligent machines, the intelligence … before we attempt to build intelligent machines we have to first understand how the brain thinks, and there is nothing artificial about that.”

And with that, though my conversation with futurist Jonathan Peck fundamentally changes the way I view AI research and product development, learning about Jeff Hawkins’s arduous but brilliant path leaves me with hope after all. Hope that there is intentional, critical thought being given to the development of AI technology within the logic of what we understand intelligence to be, rather than a free-for-all of programming machines and simply calling it ‘intelligence.’ Wherever that thought leads, it is certain to be a future we shall see.


Sam Jenkins