
Most people know that robots no longer sound like tinny trash cans. They sound like Siri, Alexa, and Gemini. They sound like the voices in labyrinthine customer support phone trees. And even those robot voices are being made obsolete by new AI-generated voices that can mimic every nuance of human speech, down to specific regional dialects. With just a few seconds of audio, a person's voice can now be cloned.
This technology will replace humans in many areas. Automated customer support will save money by cutting staffing at call centers. AI agents will make calls on our behalf, conversing with others in natural language. All of this is happening, and it will soon be commonplace.
But there is something fundamentally different about talking with a bot rather than with a person. A person can be a friend. An AI cannot be a friend, no matter how people treat it or respond to it. AI is at best a tool, and at worst a means of manipulation. Humans need to know whether we are talking with a living, breathing person or with a robot carrying an agenda set by whoever controls it. That is why robots should sound like robots.
You can't simply label AI-generated speech. It will come in many different forms, so we need a way to recognize AI voices that works regardless of the modality. It needs to work for long or short audio clips, even ones only a second long. It needs to work for any language and in any cultural context. At the same time, it shouldn't constrain the sophistication of the underlying system or the complexity of its language.
We have a simple proposal: all talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to synthesize genuinely robotic speech, ring modulators were used to make human actors sound robotic. Over the past few decades we grew accustomed to mechanical-sounding voices simply because text-to-speech systems were just good enough to produce intelligible speech that did not sound human. Now we can use that same technology to make indistinguishably human AI speech sound robotic again.
A ring modulator has several advantages: it is mathematically simple, it can be applied in real time, it does not affect the intelligibility of the voice, and, most importantly, it sounds universally "robotic" because of its historical use for robot voices.
Responsible AI companies that offer voice synthesis or voice assistants in any form should add a ring modulator at some standard frequency (for example, between 30 and 80 Hz) and at a minimum amplitude (for example, 20 percent). That's it. People will catch on quickly.
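To make that concrete, one way to formalize the effect (interpreting the "amplitude" as a modulation depth d that blends the dry signal with its modulated copy, which is our reading rather than anything prescribed above) is

\[
y(t) = \bigl[(1 - d) + d \sin(2\pi f_c t)\bigr]\, x(t),
\]

where x(t) is the synthesized voice, f_c is a carrier frequency in the 30-80 Hz range, and d is at least 0.2. Because this amounts to a single multiplication per audio sample, it adds essentially no latency or computational cost.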
Here are a few clips you can listen to for examples of what we are suggesting. The first is an AI-generated "podcast" from Google's NotebookLM featuring two AI "hosts." NotebookLM created the podcast script and audio given only the text of this article. The next two clips feature the same podcast with the AI voices modified by a ring modulator, one more subtly and one less so.
We generated the audio effect with a 50-line Python script written by Anthropic's Claude. One of the most famous robot voices was that of the Daleks from Doctor Who in the 1960s. Back then it was difficult to synthesize robotic voices, so the sound was actually an actor's voice run through a ring modulator. It was tuned to around 30 Hz, as in our example, with the modulation depth (amplitude) varied depending on how strong the robotic effect needed to be. We expect the AI industry to experiment and converge on a good balance of these parameters and settings, and to use better tools than a 50-line script, but this shows how simple the effect is to achieve.
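For readers who want to try this themselves, here is a minimal sketch of what such a script might look like. It is not the script mentioned above; the file names, the NumPy/SciPy dependencies, and the exact parameter choices are our own illustrative assumptions.

```python
# Illustrative ring-modulator sketch (not the authors' script): applies a
# 30 Hz carrier at 20 percent depth to a mono 16-bit WAV file.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 30.0   # carrier frequency, within the suggested 30-80 Hz band
DEPTH = 0.2         # modulation depth ("amplitude"), the suggested minimum

def ring_modulate(samples: np.ndarray, rate: int,
                  carrier_hz: float = CARRIER_HZ,
                  depth: float = DEPTH) -> np.ndarray:
    """Blend the dry signal with a ring-modulated copy of itself."""
    t = np.arange(len(samples)) / rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    x = samples.astype(np.float64)
    return (1.0 - depth) * x + depth * x * carrier

if __name__ == "__main__":
    rate, audio = wavfile.read("input.wav")   # hypothetical input file
    if audio.ndim > 1:                        # mix stereo down to mono
        audio = audio.mean(axis=1)
    out = ring_modulate(audio, rate)
    out = np.clip(out, -32768, 32767).astype(np.int16)
    wavfile.write("robotic.wav", rate, out)   # hypothetical output file
```

Pushing the depth toward 1.0 should give something closer to the classic Dalek sound, while keeping it near 0.2 leaves the voice natural but audibly marked.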
Of course, there will also be nefarious uses of AI voices. Scams that use voice cloning get easier every year, and they have been possible for years for anyone with the right know-how. Just as we are learning that we can no longer trust the images and videos we see, because they could easily have been generated by AI, we will all soon learn that someone who sounds like a family member asking for money might be a scammer using a voice-cloning tool.
We don't expect scammers to follow our proposal: they will find a way around it no matter what. But that is always true of security standards, and a rising tide lifts all boats. We think the bulk of the use will be in the popular voice interfaces offered by the major companies, and everyone should know when they are talking to a robot.