
Whether it's the virtual assistants in our phones, the chatbots providing customer service for banks and clothing stores, or tools like ChatGPT and Claude that lighten our workloads, artificial intelligence has become part of our daily lives. We tend to assume that these bots are nothing more than machines, with no spontaneous or original thought, and certainly no feelings. It seems ridiculous to imagine otherwise. But lately, that is exactly what some AI experts are asking us to do.
Eleos AI, a nonprofit organization dedicated to exploring the possibility of AI sentience, or the capacity to feel, released a report in October in partnership with the NYU Center for Mind, Ethics, and Policy, titled "Taking AI Welfare Seriously." In it, they assert that AI achieving sentience is something that could really happen in the not-too-distant future, perhaps only about a decade from now. Therefore, they argue, we have a moral imperative to start thinking seriously about the wellbeing of these entities.
I agree with them. The report makes clear to me that, unlike rocks or rivers, AI systems will soon have certain features that make consciousness within them more plausible: capacities such as perception, attention, learning, memory, and planning.
That said, I also understand the skepticism. The idea that any inorganic entity could have subjective experience strikes many people as laughable, because consciousness is assumed to be exclusive to carbon-based organisms. But as the report's authors note, this is a belief rather than a demonstrable fact; it reflects just one theory of consciousness. Some theories hold that biological material is required, while others hold that it is not, and we have no way of knowing for certain which is true. It may be that the emergence of consciousness depends on a system's structure and organization, not on its specific chemical composition.
The core concept at play in conversations about AI sentience is a classic one in moral philosophy: the idea of the "moral circle," which describes the kinds of beings we grant moral consideration. The idea has been used to describe whom a person or society cares about, or at least whom they ought to care about. Historically, only humans were included, but over time many societies have brought some animals into the circle.
Many of the philosophers and organizations devoted to studying AI sentience come from the field of animal studies, and they are essentially arguing for extending that same line of thinking to inorganic entities, including computer programs. If there is a realistic possibility that something could become a being capable of suffering, it would be morally negligent not to give serious consideration to how we might avoid inflicting that pain.
An expanding moral circle demands moral consistency and makes it harder to carve out exceptions based on cultural or personal biases. Right now, it is only such biases that allow us to dismiss the possibility of sentient AI. If we are morally consistent, and we care about reducing suffering, that care should extend to many other beings, including insects, microbes, and perhaps, someday, something inside our future computers.
Even if there is only a small chance that AI could develop sentience, there are so many of these "digital animals" that the implications are enormous. If every phone, laptop, virtual assistant, and so on one day has its own subjective experience, there could be trillions of entities subjected to pain at the hands of humans, all while many of us operate on the assumption that this isn't even possible in the first place, that such things simply cannot experience anything the way you or I do.
For all these reasons, technology companies such as OpenAI and Google should start taking the potential welfare of their creations seriously. That could mean hiring an AI welfare researcher and developing frameworks for estimating the probability of sentience in their systems. If AI systems do evolve some level of consciousness, research will determine whether their needs and priorities are similar to or different from those of humans and animals, and that will inform what our approaches to protecting them should look like.
Perhaps a point will come in the future when we have widely accepted evidence that robots really can think and feel. But if we wait until then to even entertain the idea, imagine all the suffering that could happen in the meantime. Right now, with AI at a promising but still nascent stage, we have the chance to head off potential ethical problems before they become more entrenched. Let's take this opportunity to build a relationship with technology that we won't come to regret. Just in case.
Brian Kateman is the co-founder of the Reducetarian Foundation, a nonprofit organization devoted to reducing societal consumption of animal products. His latest book and documentary is "Meat Me Halfway."