
Gone are the days when the web was dominated by people posting social media updates or exchanging memes. Earlier this year, for the first time since such data has been tracked, web-browsing bots, rather than humans, accounted for the largest share of traffic on the web.
More than half of this bot traffic comes from malicious bots, which scrape unprotected personal data left online, for example. But a growing share comes from bots sent out by AI companies to collect data for their models or to respond to user queries. In fact, ChatGPT-User, a bot that operates on behalf of OpenAI's ChatGPT, is now responsible for 6 per cent of all web traffic, while ClaudeBot, a crawler developed by Anthropic, accounts for 13 per cent.
AI companies say this data is vital for keeping their models up to date. Content creators feel differently, however, seeing AI bots as tools of large-scale copyright infringement. Earlier this year, for example, Disney and Universal sued Midjourney, alleging that the technology company's image generator rips off famous franchises such as Star Wars and The Simpsons.
Few content creators have money for lawsuits, so some are adopting more radical countermeasures. They are using online tools that make it harder for AI bots to find their content, or that trick the bots into reading it incorrectly, so that an AI starts confusing pictures of cars with pictures of cows, for example. But while this "data poisoning" can help content creators protect their work, it could also make the web a more dangerous place.
Copyright infringement
For centuries, imitators have made a quick profit by copying the work of artists. That is one of the reasons we have intellectual property and copyright laws. But the arrival over the past few years of generative AI tools such as Midjourney or OpenAI's DALL-E has exacerbated the problem.
A key concept in the US is what is known as the fair use doctrine, which allows samples of copyrighted material to be used in certain circumstances without seeking permission from the copyright holder. Fair use law is deliberately flexible, but at its heart is the idea that you can use an original work to create something new, provided the result is sufficiently transformative and has no harmful effect on the market for the original work.
Many artists, musicians and other creatives argue that AI tools blur the boundary between fair use and copyright infringement, at the expense of content creators. It isn't necessarily copyright infringement for a person to draw a picture of Mickey Mouse or Homer Simpson for their own amusement, for example. But with AI, anyone can now churn out large numbers of such images quickly, in a way that makes the transformative nature of what they have done doubtful. And once those images exist, it would be easy to produce a range of T-shirts based on them, say, which would cross from personal into commercial use and fall outside the fair use doctrine.
Keen to protect their commercial interests, some content creators in the US are taking legal action. Disney and Universal's lawsuit against Midjourney, launched in June, is just one example. Others include an ongoing legal battle between The New York Times and OpenAI over the alleged use of the newspaper's stories.

Disney has filed a lawsuit against AI company Midjourney over its image generator, which Disney says rips off its characters
Photo 12/Alamy
AI companies strongly deny any infringement, insisting that the data harvesting is permitted under the fair use doctrine. In an open letter to the US Office of Science and Technology Policy in March, Chris Lehane, chief global affairs officer at OpenAI, warned that attempts elsewhere in the world to give content creators stronger copyright protections would "suppress innovation and investment". OpenAI has previously said it would be "impossible" to develop AI models that meet people's needs without using copyrighted material. Google takes a similar view. In an open letter of its own, also published in March, the company said: "Three areas of law can impede appropriate access to data necessary for training leading models: copyright, privacy and patents."
At least for now, however, the creatives seem to have the court of public opinion on their side. When public responses to a US Copyright Office inquiry into copyright and AI were analysed, 91 per cent of the comments were found to contain negative sentiments about artificial intelligence.
It may not help AI companies win public sympathy that their bots are resource-hungry, perhaps even forcing some websites offline, and that content creators seem unable to stop them. There are techniques website owners can use to discourage bots from crawling their sites, including placing a small file at the root of the website stating that bots aren't welcome. But there are signs that bots sometimes ignore such requests and crawl on regardless.
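That small file is conventionally called robots.txt. As a rough sketch of how a well-behaved crawler is supposed to consult it, here is an example using Python's standard urllib.robotparser. The robots.txt contents and the "ExampleBrowser" agent are made up for illustration; GPTBot and ClaudeBot are real crawler names, but, as the article notes, nothing technically forces a bot to obey the file.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt banning two AI crawlers while allowing everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks before fetching; compliance is voluntary.
blocked = parser.can_fetch("GPTBot", "/article.html")          # False: told to stay out
allowed = parser.can_fetch("ExampleBrowser", "/article.html")  # True: everyone else may read
```

The protocol is purely advisory, which is exactly why the tools described below move from asking bots to stay out to actively wasting their time.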
AI data poisoning
No wonder, then, that new tools are being offered to content creators that promise stronger protection against AI bots. One such tool was launched this year by Cloudflare, an internet infrastructure company that provides its users with protection from distributed denial of service (DDoS) attacks, in which an attacker floods a web server with so much traffic that it effectively knocks the site offline. To combat AI bots that can pose a similar risk to a DDoS attack, Cloudflare fights fire with fire: it generates a maze of AI-created pages full of nonsense, so that AI bots spend all their time and energy looking at gibberish rather than the real information they came for.
The tool, known as AI Labyrinth, is designed to counter the 50 billion requests per day that Cloudflare says AI crawlers make to websites inside its network. According to Cloudflare, AI Labyrinth should "slow down, confuse, and waste the resources" of AI crawlers and other bots that don't respect "no crawl" directives. Cloudflare has since released another tool, which requires AI companies to pay to access websites or be blocked from crawling their content.
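The mechanics of such a maze can be illustrated with a toy sketch: every URL deterministically yields a unique page of filler text whose links lead only to further generated pages, never to real content, so a crawler that follows them wanders forever. This is a minimal illustration of the general idea, not Cloudflare's actual implementation, and the word list is invented.

```python
import hashlib
import random

def maze_page(path: str, n_links: int = 3) -> str:
    """Return one page of a crawler maze: filler text plus links that
    lead only to further generated pages, never to real content."""
    # Seed the generator from the path so every URL yields a stable,
    # unique page without storing anything server-side.
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    words = ["lorem", "ipsum", "quantum", "zebra", "copper", "violet", "meadow"]
    filler = " ".join(rng.choice(words) for _ in range(40))
    links = "".join(
        f'<a href="{path}/{rng.randrange(10**6)}">continue</a>\n'
        for _ in range(n_links)
    )
    return f"<html><body><p>{filler}</p>\n{links}</body></html>"

# Every visit to the same URL returns the same page; every link leads deeper.
page = maze_page("/maze/start")
```

Because pages are derived from their URL, the maze costs the defender almost nothing to serve while the crawler pays for every request it makes.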
An alternative is to allow AI bots to access online content, but content that has been subtly "poisoned" to make the data less useful to them. Two tools developed at the University of Chicago, Glaze and Nightshade, have become central to this kind of resistance. Both are free to download from the university's website and can be run on a user's own computer.
Glaze, released in 2022, works defensively by applying imperceptible, pixel-level changes, or "style cloaks", to an artist's work. Invisible to humans, these changes cause AI models to misread the artistic style: a watercolour might be perceived as an oil painting, for example. Nightshade, released in 2023, is a more offensive tool that distorts image data, again imperceptibly to human eyes, in a way that encourages an AI model to make incorrect associations, such as learning to link the word "cat" with pictures of dogs. Each tool has been downloaded more than 10 million times.
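To get a feel for what "imperceptible, pixel-level changes" means, here is a toy NumPy sketch that nudges each pixel by at most 2 intensity levels out of 255, in whichever direction shifts a stand-in "style feature". The linear feature, the sign-step rule and all the numbers are illustrative assumptions; Glaze and Nightshade use far more sophisticated, model-aware optimisation.

```python
import numpy as np

# Illustrative stand-ins: an 8x8 grayscale "artwork" and a linear "style
# feature" playing the role of what an AI model extracts from the image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
style_direction = rng.normal(size=(8, 8))

def style_feature(img):
    return float((img * style_direction).sum())

# Nudge every pixel by at most +/-2 levels (out of 255) in the direction
# that shifts the feature: invisible to a person, meaningful to the model.
epsilon = 2.0
cloak = epsilon * np.sign(style_direction)
cloaked = np.clip(image + cloak, 0, 255)

max_change = float(np.abs(cloaked - image).max())      # never exceeds 2
shift = style_feature(cloaked) - style_feature(image)  # feature moves anyway
```

The point of the sketch is the asymmetry: a change bounded tightly enough to be invisible per pixel can still move a model's internal representation, because the perturbation is aligned with what the model measures rather than with what the eye sees.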

The Nightshade tool gradually poisons AI models so that they represent dogs as cats
Ben Y. Zhao
Tools like these put power back in the hands of artists, says Ben Zhao at the University of Chicago, the lead researcher behind both Glaze and Nightshade. "These are companies with trillion-dollar market caps, literally the largest companies in the world, and they take by force what they want," he says.
Using tools like Zhao's is a way for artists to exercise what little power they have over how their work is used. "Glaze and Nightshade are really interesting tools, and they represent a wonderful approach that doesn't depend on changing regulations, which could take some time and may never end up benefiting artists," says Jacob Hoffman-Andrews at the Electronic Frontier Foundation, a US-based digital rights group.
The idea of content creators protecting themselves by trying to ward off would-be imitators is nothing new, says Eleonora Rosati at Stockholm University in Sweden. "Back in the day, when there was a lot of unauthorised use of databases, from phone directories to patent listings, you would be advised to plant some errors to help you with evidence," she says. A mapmaker, for example, might deliberately include fake places on its maps. If those false names later appeared on a map produced by a competitor, they would provide clear evidence of plagiarism. The practice still makes newspaper headlines today: the lyrics website Genius claimed to have included different types of apostrophes in its content, which it said proved Google was using its content without permission. Google denied the allegations, and the court case Genius brought against Google was dismissed.
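The trap-data trick Rosati describes can be sketched in a few lines: hide a fingerprint in otherwise normal text by choosing between a straight apostrophe and a typographically curly one, one bit per line. The encoding scheme and the sample lines below are invented for illustration; Genius's reported scheme was more elaborate.

```python
# Straight vs curly apostrophe: visually near-identical, one bit of signal.
STRAIGHT, CURLY = "'", "\u2019"

def embed(lines, bits):
    """Mark the first apostrophe in each line with one bit of the fingerprint."""
    out, i = [], 0
    for line in lines:
        if STRAIGHT in line and i < len(bits):
            mark = CURLY if bits[i] == "1" else STRAIGHT
            line = line.replace(STRAIGHT, mark, 1)
            i += 1
        out.append(line)
    return out

def extract(lines):
    """Read the fingerprint back from the first apostrophe of each line."""
    bits = ""
    for line in lines:
        for ch in line:
            if ch in (STRAIGHT, CURLY):
                bits += "1" if ch == CURLY else "0"
                break
    return bits

original = ["don't stop", "it's late", "we're here", "can't wait"]
marked = embed(original, "1010")
# If the pattern "1010" later turns up in a rival's copy, that is
# evidence the text was lifted rather than independently transcribed.
```

The fingerprint survives copy-and-paste precisely because a scraper has no reason to normalise punctuation it cannot see.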
For Hoffman-Andrews, even the word "sabotage" is up for debate. "I don't necessarily think of it as sabotage," he says. "These are the artists' own images and they are applying their own modifications. They are entirely free to do what they want with their data."
It isn't known to what extent AI companies are taking countermeasures to combat this well-poisoning, whether by ignoring any content flagged as poisoned or by trying to strip the poison from the data. But Zhao's attempts to break his own system showed that Glaze remained 85 per cent effective against every countermeasure he could think of, suggesting that AI companies may conclude that dealing with poisoned data is more trouble than it is worth.
Spreading fake news
It isn't only artists with content to protect who are trying to poison the well against AI, however. Some nation states may be using similar principles to push false narratives. Earlier this year, for example, the Atlantic Council, a US-based think tank, claimed that Pravda, a news network in Russia whose name means "truth" in Russian, used poisoning to trick AI bots into spreading fake news stories.
Pravda's approach, as alleged by the think tank, involves publishing millions of web pages, a little like Cloudflare's maze of AI-generated content. In this case, though, the Atlantic Council says the pages are designed to look like genuine news articles and are used to promote the Kremlin's narrative about Russia's war in Ukraine. The sheer volume of stories can lead AI crawlers to give weight to certain accounts when responding to users: an analysis published this year by NewsGuard, a US organisation that tracks Pravda's activities, found that 10 leading chatbots produced text in line with Pravda's views in a third of cases.
That relative success in swaying chatbots highlights a problem inherent in all things AI: any technical trick used by good actors with good intentions can equally be adopted by bad actors with ill intent.
There is a solution to these problems, says Zhao, although AI companies may be unwilling to consider it. Instead of indiscriminately hoovering up whatever data they can find online, AI companies could enter formal agreements with trusted content providers and ensure their products are trained only on reliable data. But that approach carries a price, because licensing agreements can be expensive. "These companies are unwilling to license the work of these artists," says Zhao. "At the root of all this is money."
Topics:
- Artificial intelligence
- ChatGPT