Artificial intelligence keeps popping up everywhere, from the generative AI that created the image above to the financial sector. What AI currently does is copy and recombine data into new patterns (still derived from the patterns it was fed), which means that today's AI is not conscious and is not “thinking.” That could easily change in the next decade; only time will tell. To prepare for the moment when an AI seems to express consciousness, we need to think about what consciousness actually is and how we could confirm its existence.
What might we ask a potential mind born of silicon? How the AI responds to questions like “What if my red is your blue?” or “Could there be a color greener than green?” should tell us a lot about its mental experiences, or lack thereof. An AI with visual experience might entertain the possibilities suggested by these questions, perhaps replying, “Yes, and I sometimes wonder if there might also exist a color that mixes the redness of red with the coolness of blue.” On the other hand, an AI lacking any visual qualia might respond with, “That is impossible: red, green, and blue each exist as different wavelengths.” Even if the AI attempts to play along or deceive us, answers like, “Interesting, and what if my red is your hamburger?” would show that it missed the point.
Of course, it’s possible that an artificial consciousness might possess qualia vastly different from our own. In this scenario, questions about specific qualia, such as color qualia, might not click with the AI. But more abstract questions about qualia themselves should still filter out philosophical zombies. For this reason, the best question of all would likely be that of the hard problem itself: Why does consciousness even exist? Why do you experience qualia while processing input from the world around you? If this question makes any sense to the AI, then we’ve likely found artificial consciousness. But if the AI clearly doesn’t understand concepts such as “consciousness” and “qualia,” then evidence for an inner mental life is lacking.
How we engage the world around us literally shapes how our brains work; changing how we interact with reality can therefore alter how we think. As humans we carve pathways in our brains, both physically and mentally, and those pathways give us our heuristic approach to the world around us. We can alter those heuristics by exposing ourselves to new and unexpected situations. The more unique experiences we have, the more connected our brains become, and the more interesting the ways in which we can process information.
Right now, I need to put some coffee into my brain to get it working 😉
At its core, a thinking pattern is an implicit rule of thumb for the way we connect aspects of our reality. Given the complexity of this reality, the more diverse our trained thinking patterns are, and the better refined their associated triggers, the more accurately we will be able to interact with the information around us.
Because thinking patterns emerge from the mental habit loops we form as a response to experience, the only way to diversify them is to seek out new and conflicting encounters. We can do this through books, unfamiliar environments, or even hypothetical thought games.
James Burke is known for his BBC series Connections, which was all about how seemingly unrelated inventions and concepts are actually connected in interesting ways. He has spent his career urging people to look at the in-between of industries and fields of research, because it is there that we find true innovation.
In our modern era we can each create our own filter bubble (a big issue in the recent US election), which can make finding such connections a problem. Burke’s solution is to Kickstart an app that cross-references his own specially designed database with Wikipedia in order to help you break out of your bubble and discover cool new connections!
You may have noticed that when we browse the news or type a query into Google, we tend to seek confirmation more than information: we expect our current model of the world to survive untarnished. When we want to make sense of something, we develop a hypothesis just like any scientist would, but when we check whether we are correct, we often stop as soon as we find confirmation of our hunch or feel as though we understand. Without training, we avoid epiphany by never testing the null hypothesis and facing the disconfirmation it might deliver.
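Peter Wason’s classic 2-4-6 experiment makes this concrete: subjects shown the triple 2-4-6 usually hypothesize “numbers increasing by two” and then test only triples that fit that guess, so they never discover the experimenter’s real, broader rule. Here is a minimal Python sketch of that dynamic (the function and variable names are mine, chosen for illustration):

```python
# Toy version of Wason's 2-4-6 task: confirmation alone never reveals the rule.
def hidden_rule(triple):
    """The experimenter's actual rule: any strictly increasing triple."""
    a, b, c = triple
    return a < b < c

# A subject who believes the rule is "each number increases by two" and tests
# only confirming cases sees every guess accepted, breeding false confidence.
confirming_guesses = [(4, 6, 8), (10, 12, 14), (1, 3, 5)]
print(all(hidden_rule(t) for t in confirming_guesses))  # every guess passes

# The informative test is one the "increase by two" hypothesis predicts should
# FAIL. When (1, 2, 3) is accepted anyway, the hypothesis is exposed as too
# narrow, and the subject finally learns something.
print(hidden_rule((1, 2, 3)))
```

The disconfirming probe is the only one that carries new information, which is exactly the “null hypothesis” the untrained mind avoids.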
Since the 1970s, Burke has argued that we would need better tools than search alone if we were to break out of this way of thinking. His new app aims to do that by searching Wikipedia “connectively” and producing something normal internet searches often do not: surprises, anomalies, and unexpected results.
Some new research hints that we should change how we frame problems: we often ask ourselves what we should do, when what we ought to do is ask what we could do.
Asking yourself, for example, “What should I do with my life?” tacitly implies that there’s a right and a wrong answer to that question. It seems that the word should can cause us to think in black and white, while could reveals the in-between shades of gray. “What initially seems like a problem that involves competing and incompatible principles turned out to be a problem that can be solved when we approach it with a ‘could’ mind-set,” Francesca Gino, one of the paper’s co-authors and author of Sidetracked: Why Our Decisions Get Derailed, and How We Can Stick to the Plan, said in an email to Science of Us.
There exists a Center for Applied Rationality (as opposed to unapplied rationality?) that has set out to make the world more rational. Their approach is questionable, since it’s unclear what form of rationality they are openly proselytizing. Regardless, they do have a neat checklist to help people work through problems and see debates in a more thoughtful way.
Here’s the first item on their rationality checklist:
Reacting to evidence / surprises / arguments you haven’t heard before; flagging beliefs for examination.
When I see something odd – something that doesn’t fit with what I’d ordinarily expect, given my other beliefs – I successfully notice, promote it to conscious attention and think “I notice that I am confused” or some equivalent thereof.
When somebody says something that isn’t quite clear enough for me to visualize, I notice this and ask for examples.
I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode.
I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration.
I consciously attempt to welcome bad news, or at least not push it away.