
Sunday 16 April 2023

For the Intellectuals

 


 

Here is another story about intelligence. We will not stop thinking about the so-called objective world.

 

15 April 2023, from New Scientist magazine

 

Physicist David Wolpert on how to study concepts beyond imagination

There is probably a plane of knowledge beyond the grasp of human minds. But mathematician and physicist David Wolpert says it is still possible to explore this unimaginable realm

By Abigail Beall

 

I like to think my dogs experience the world in the same way I do. They seem excited when I ask if they want to go to the beach – their favourite place – and when I ask if they are hungry, they reply with a delighted tail wag. But until recently, I hadn’t paused to wonder whether a dog can even understand the concept of a question.

I was prompted to ponder this after a conversation with David Wolpert, who is based at the Santa Fe Institute in New Mexico. Wolpert is a polymath who has turned his hand to many disciplines over the years, including artificial intelligence, physics, economics and computer science. Recently, he has taken a deep dive into questions of intelligence and its outer limits.

In a video call, with my dogs sitting behind me, Wolpert talked me through some of his latest work, which centres on the nature of human intelligence. He points out that, just as dogs and other animals lack our appreciation for logic, there are inherent limits to human knowledge too. Wolpert isn’t the first to talk about unknown unknowns, but, in a recent paper, he approached this topic from a different angle: trying to pinpoint weaknesses in our reasoning so as to identify some of the things we can’t know.

It is both exhilarating and decidedly tricky territory to navigate, but Wolpert says these are some of the most profound questions in science. We have to consider them now, he says, because the artificial intelligence revolution is upon us – and machines may not have the cognitive limits that we do.

Abigail Beall: You recently produced an essay arguing that there must be something lacking in our ability to understand the universe. What prompted you to write it?

David Wolpert: It started when I was involved in an event at the Santa Fe Institute, where I work. There was a presentation looking at how it is that our brains, which were designed for operating in the savannah, can also be good at other things. This is a very old question. I was asked to write an essay responding to this puzzle. As I was considering my response, I kept running into a different, more fundamental question.

Consider the collection of all cognitions, thoughts or perceptions that are beyond the ability of any human to have – what will we ever be able to say about those? For example, can we even know if they exist? If so, can we say whether the set of cognitions or thoughts that we cannot have is larger than the set of those we can have?
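
One standard way to make that kind of size comparison precise is Cantor's theorem; the formal gloss below is added here for illustration and is not quoted from Wolpert's essay.

```latex
% Cantor's theorem (an illustrative gloss, not a formula quoted from
% Wolpert's essay): for any set S, its power set is strictly bigger,
% because for any map f : S -> P(S), the diagonal set
%   D = { x in S : x not in f(x) }
% cannot equal f(x) for any x, so no f is ever onto.
\[
  |S| \;<\; |\mathcal{P}(S)|
\]
```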

I wrestled with the title for that essay and eventually called it: “What can we know about that which we cannot even imagine?” But I still don’t like that because “what we cannot imagine” is almost by definition ineffable.

 

[Caption: Are we as clueless as a paramecium when it comes to certain questions?]

What conclusions do you reach about the limits of human thought?

I present examples of how, responding to the original question about the evolution of human brains, one runs into that second, more fundamental question about things we can’t imagine. Then, in addressing the second question, I conclude that there are vastly more possible cognitions than can ever be formulated in human language. This set of cognitions, by definition, cannot be formulated, never mind investigated.

So, there must be some way in which our ability to understand the universe around us fails.

I’ve long been adamant that we must always be on the lookout for inadvertently assuming that human thought is fundamentally privileged in some way. We already know that we are bad at making judgements about ourselves. For example, studies show that people tend to rate themselves as being better at driving than average. This can’t be true for everyone, but we all believe it. This problem of implicit self-aggrandisement extends far beyond that, though. It is also deeply wrapped into the foundations of all analysis by academics.

How do we avoid falling into this trap?

Think about a paramecium, a very simple, single-celled organism. It’s not just that it wouldn’t understand any questions I ask – it wouldn’t understand the concept of a question. We need to put ourselves on the other side and say: we are the paramecium, what is beyond us?

Nicolaus Copernicus – the Renaissance astronomer who argued that Earth orbits the sun – is my religion, in a certain sense. I’m always on the lookout for how we might be completely wrong and aggrandising ourselves, just like we did when we thought that Earth was the centre of the universe. It struck me that language is a great example of such self-aggrandising. Maybe, by investigating the limitations built into human language, we can avoid falling into at least this instance of the trap.

In what way does language limit us?

What distinguishes us from the dogs behind your left shoulder? Communication. We tend to think of language as one of our great achievements as a species. But when you think about it, language is just finite strings of a finite set of symbols – that’s what sentences are – and as a mathematical structure, that is extraordinarily restricted.

This applies to maths too, by the way. Many people have said that the world is even more mysterious than we can think, but they’ve not really tried to quantify that. When I tried to do this, I found that mathematics is an extremely rich structure that actually points at types of maths that it itself says it cannot describe.
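
The restriction Wolpert points to can be made concrete with a countability argument; the sketch below is a formal paraphrase added here, not a derivation taken from his paper.

```latex
% A countability sketch (a paraphrase, not taken from the paper).
% Let \Sigma be a finite alphabet of symbols. The set of all finite
% strings over \Sigma is a countable union of finite sets:
\[
  |\Sigma^{*}| \;=\; \Bigl|\bigcup_{n=0}^{\infty} \Sigma^{n}\Bigr| \;=\; \aleph_{0}
\]
% So at most countably many objects can ever be named, stated or
% defined in any language built from finite sentences. By Cantor's
% diagonal argument, the real numbers alone already outrun this:
\[
  |\mathbb{R}| \;=\; 2^{\aleph_{0}} \;>\; \aleph_{0}
\]
% Hence "almost all" reals, never mind cognitions, can never be
% individually picked out by any finite string of symbols.
```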

So, we know there are levels of intelligence that are currently unfathomable to us. Are they permanently off-limits?

I think it is very unlikely that, by the end of this century, the human genome will be the only storage system for intelligent life on Earth. I think CRISPR gene-editing technology means scientists are going to make babies who don’t need sleep, who have IQs of 200, who have gills – whatever you can imagine. Some religions are going to be deeply challenged by it, there will probably be wars about it. But CRISPR is out of the bottle – our DNA is going to be mucked around with. There will be disasters and mistakes at the start, but eventually they’re going to come up with something new that will be self-propagating. So I don’t think humans as they are currently configured are where intelligent life stops on Earth.

What about intelligent machines?

One of my previous lives was in machine learning. Everybody is now debating whether there will be a single, monolithic artificial general intelligence that can reach our level. Would it understand things in the way that we do? The measure is always us.

I think the premise here is completely mistaken. The question is not “will these intelligences be able to emulate us and just add a little more?”, since such machines will be doing things differently. The question is: “what will their actual capabilities be?”

Everybody seems to have forgotten the mantra that a lot of power comes from distributed systems, and this is key in thinking about what the capabilities of intelligent machines will be.

 

What are distributed systems and why are they interesting?

Think of the “flash crash” of 2010, an event in which stock markets fell by about a trillion dollars in value within minutes before recovering most of it in about half an hour. It was caused by a lot of bots that do automated trades. On their own, these bots are based on simple if/then programs, but they somehow interacted collectively to suddenly cause the entire market to nosedive. The market slowly crawled its way back to where it was, so this wasn’t like the much more protracted economic downturn that began in 2007. But to this day, nobody can understand what went on. No new regulations have been put in place to try to prevent a repeat, because nobody knows exactly why it happened. It is described in the scientific literature as having been like some kind of alien ecosystem that we don’t understand.
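
To make the “simple if/then rules, complex collective outcome” point concrete, here is a minimal toy simulation. Every rule and parameter in it (the stop-loss thresholds, the price-impact constant, the rebound rate) is invented for illustration; it is a sketch of cascading threshold behaviour, not a model of the actual 2010 crash.

```python
"""Toy 'flash crash': many bots, each following a trivial if/then
stop-loss rule, collectively amplify a small shock into a plunge
and slow rebound. All parameters are invented for illustration."""

import random

random.seed(42)

N_BOTS = 1000
STEPS = 60
IMPACT = 0.0003          # fractional price impact per bot that sells in a step
REBOUND = 0.02           # fraction of the gap to fair value closed per step

price = 100.0            # current market price
fair_value = 100.0       # what "fundamental" buyers think it is worth

# Each bot's rule: "if the price has fallen more than my threshold
# below its starting level, sell everything." Thresholds vary by bot.
thresholds = [random.uniform(0.005, 0.05) for _ in range(N_BOTS)]
reference = [price] * N_BOTS
holding = [True] * N_BOTS

history = []
for t in range(STEPS):
    if t == 5:
        price *= 0.99    # a small one-off shock: -1 per cent

    # Each bot applies its if/then rule independently.
    sellers = 0
    for i in range(N_BOTS):
        if holding[i] and price < reference[i] * (1 - thresholds[i]):
            holding[i] = False
            sellers += 1

    # Aggregate selling pushes the price down; then fundamental
    # buyers slowly pull it back toward fair value.
    price *= 1 - IMPACT * sellers
    price += REBOUND * (fair_value - price)
    history.append((t, price, sellers))

for t, p, s in history:
    if s > 0 or t % 10 == 0:
        print(f"t={t:2d}  price={p:7.2f}  sellers this step={s:4d}")
```

Run it and the price sits flat until the 1 per cent shock trips the most nervous bots, whose selling trips the rest, after which the price slowly climbs back: a plunge-and-recovery shape produced by nothing more than independent threshold rules interacting.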

Now, imagine something like that, but with artificial intelligence systems like AlphaGo that are practising and learning across the whole web. What sort of vastly more complicated versions of the flash crash ensue when the bots are replaced by these kinds of machines? It’s not going to be some human-like intelligence any more, it’s going to be different. It’s hard not to believe that, in some ways, it will be vastly more powerful.

Between CRISPR and distributed, interacting AIs, I can’t imagine that, by the year 2100, we will still be “the most intelligent creatures on Earth”. Our progeny will be here.

Let’s imagine those kinds of intelligences have arrived. Would they ever be able to explain how they think to us?

My whole shtick in all of this is that I can’t conceive of how that could happen. I mean, it might be that what emerges will be a type of intelligence coming from something I haven’t thought of. People talk about the wood wide web, all trees talking to one another. It might be that somehow we will gain the ability to converse with the trees or with, say, cetaceans or something – who knows, I’m just riffing now – and maybe they can actually provide us with an answer using means that I can’t even conceive of.

It might turn out that tree roots can tickle our toes while the cetaceans blow bubbles in particular patterns in a swimming pool right next to us and somehow we have a deep epiphany. Maybe it’ll be a neuro-prosthesis. There could be other ways. I can’t conceive of them though. I can’t see how I could provide such details to the paramecium – but again, that’s the whole point.

Abigail Beall is a features editor at New Scientist