Artificial Intelligence keeps popping up everywhere, from generative AI (which created the image above) to the financial sector. What AI currently does is copy and recombine data into new patterns, while still relying on the patterns it was given, which means that today's AI is not conscious and is not "thinking". That could easily change within the next decade; only time will tell. To prepare for the day an AI seems to express consciousness, we need to think about what consciousness actually is and how we could confirm its existence.
What might we ask a potential mind born of silicon? How an AI responds to questions like "What if my red is your blue?" or "Could there be a color greener than green?" should tell us a lot about its mental experiences, or lack thereof. An AI with visual experience might entertain the possibilities these questions suggest, perhaps replying, "Yes, and I sometimes wonder if there might also exist a color that mixes the redness of red with the coolness of blue." An AI lacking any visual qualia, on the other hand, might respond with, "That is impossible; red, green, and blue each exist as different wavelengths." And even if the AI attempts to play along or deceive us, answers like "Interesting, and what if my red is your hamburger?" would show that it has missed the point.
Of course, it’s possible that an artificial consciousness might possess qualia vastly different from our own. In that scenario, questions about specific qualia, such as color qualia, might not click with the AI. But more abstract questions about qualia themselves should still filter out zombies. For this reason, the best question of all is likely that of the hard problem itself: Why does consciousness even exist? Why do you experience qualia while processing input from the world around you? If this question makes any sense to the AI, then we’ve likely found artificial consciousness. But if the AI clearly doesn’t understand concepts such as “consciousness” and “qualia,” then evidence for an inner mental life is lacking.
Water, water everywhere, and plenty of drops to drink. Researchers from MIT have found a way to passively convert seawater into drinking water using a setup so simple it seems too good to be true. Their device essentially uses heat from the sun's rays and a siphon. The apparatus produces more water and rejects more salt than other passive setups: a device one square metre in size can generate five litres of water. All of this with no external energy. They have even tested the device in open water, and it works! Imagine hundreds of these devices floating on the ocean, bringing drinking water to cities.
The heart of the team’s new design is a single stage that resembles a thin box, topped with a dark material that efficiently absorbs the heat of the sun. Inside, the box is separated into a top and bottom section. Water can flow through the top half, where the ceiling is lined with an evaporator layer that uses the sun’s heat to warm up and evaporate any water in direct contact. The water vapor is then funneled to the bottom half of the box, where a condensing layer air-cools the vapor into salt-free, drinkable liquid. The researchers set the entire box at a tilt within a larger, empty vessel, then attached a tube from the top half of the box down through the bottom of the vessel, and floated the vessel in saltwater.
In this configuration, water can naturally push up through the tube and into the box, where the tilt of the box, combined with the thermal energy from the sun, induces the water to swirl as it flows through. The small eddies help to bring water in contact with the upper evaporating layer while keeping salt circulating, rather than settling and clogging.
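To get a feel for the scale involved, here is a rough sizing sketch. The article only states "five litres per square metre" without a time unit, so this sketch makes two loudly labeled assumptions: that the rate is hourly, and that the device gets about six productive hours of sun per day. The demand figure is likewise a generic rough estimate, not from the article.

```python
import math

# Back-of-the-envelope sizing sketch for the passive desalination device.
# ASSUMPTIONS (not stated in the article): the quoted 5 litres per square
# metre is an hourly rate, and each device sees ~6 productive sun hours/day.
LITRES_PER_M2_PER_HOUR = 5        # output quoted for a 1 m^2 device
SUN_HOURS_PER_DAY = 6             # assumed productive sunlight hours
DRINKING_LITRES_PER_PERSON = 3    # rough daily drinking-water need per person

def devices_needed(people: int, device_area_m2: float = 1.0) -> int:
    """Estimate how many devices would cover a community's drinking water."""
    daily_demand = people * DRINKING_LITRES_PER_PERSON                    # litres/day
    daily_output = LITRES_PER_M2_PER_HOUR * SUN_HOURS_PER_DAY * device_area_m2
    # Round up: a fractional device still means one more physical unit.
    return math.ceil(daily_demand / daily_output)

print(devices_needed(10_000))  # a town of 10,000 people
```

Under these assumptions, each square metre of device yields about 30 litres a day, so a town of 10,000 would need on the order of a thousand floating units, which is small enough to make the "hundreds of devices" picture above plausible.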
The visually impaired population is getting more support from AI to help them ‘see’ the world around them. The already very successful human-powered app Be My Eyes (we’ve covered it before) has launched Be My AI, an automated tool that can help in certain circumstances. Using an AI trained to recognise everyday objects, users can point their phone's camera at something to quickly identify it. Of course it isn’t perfect, nor can it fully replace the human aspect of Be My Eyes. One user has written up their experience of the tool, which you can read here.
You can use Be My AI 24/7 in all those situations where you want quick visual assistance without necessarily calling a human volunteer, or where you simply don’t feel like talking to another person. You may be amazed that Be My AI knows more than just what’s in the photo: just ask for more context and discover what it can tell you.
Be My AI will also give deaf-blind users a new way to get information via, for example, a braille display. Be My AI’s written responses are available in 29 user-selectable languages.
For all of its advantages, though, Be My AI does not and should not replace a white cane, guide dog, or other mobility aid that provides for safe travel.
The threats Artificial Intelligence poses are still largely unknown, and many people are rightfully concerned about the potential harm AI can cause. The AI Incident Database aims to help us as a society think through how we should regulate AIs by tracking problematic outcomes caused by the actions (or inaction) of an AI. Anyone can submit an incident and anyone can explore the database. In time this should become a very informative resource for researchers and policy makers.
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings.
It’s the final day of the Collision conference, and there are still more startups to write about. It’s neat that there are so many companies looking to address the UN SDGs!
Loop wants to be the so-called middleman, and that’s a good thing. They are on a mission to make it super easy and simple to buy and sell children’s items from other parents in your local community. They will literally handle pickup and delivery of the kids’ items you want to resell.
Everyone knows we need to get cars off the road, and one way to do that is to encourage car sharing instead of car owning. RideALike is a new car-share company with a business model similar to Airbnb’s.
Future Fields genetically modifies fruit flies to produce proteins and other biomolecules. The output can then be used to make vaccines, insulin, or other useful bioproducts. This is a really neat approach to speeding up production of biomolecules, and doing so in a green fashion. The CEO of the company told me that the fruit flies can’t escape because they are genetically engineered to have curved wings.
Today I had to park at a bike rack further away from the entrance. It’s good to see so many people using two wheels to attend Collision. Let’s hope for more bike parking next year.