Italy’s Data Protection Authority on Friday temporarily banned OpenAI’s ChatGPT chatbot and launched a probe into a suspected breach of data collection rules by the artificial intelligence application.

The agency, also known as Garante, accused OpenAI, which is backed by Microsoft Corp (MSFT.O), of failing to check the age of ChatGPT’s users, who are supposed to be 13 or older.

ChatGPT has an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot, Garante said. OpenAI has 20 days to respond with remedies or risk a fine of up to 4% of its annual worldwide turnover.

OpenAI did not immediately respond to a request for comment.

ChatGPT was still answering questions posted by Italian users on the platform on Friday evening.

An authority spokesman said the company was informed of the decision on Friday morning and that it would have been materially impossible to cut off access in Italy the same day, but he expected it to do so by Saturday.

“If they ignore the ban, the authority can impose fines,” the spokesman said.

Italy, which provisionally restricted ChatGPT’s use of domestic users’ personal data, became the first Western country to take action against a chatbot powered by artificial intelligence.

The chatbot is also unavailable in mainland China, Hong Kong, Iran, Russia and parts of Africa, where residents cannot create OpenAI accounts.

Since its release last year, ChatGPT has set off a tech craze, prompting rivals to launch similar products and companies to integrate it or similar technologies into their apps and products.

The rapid development of the technology has attracted attention from lawmakers in several countries. Many experts say new regulations are needed to govern AI because of its potential impact on national security, jobs and education.

The European Commission, which is debating the EU AI Act, may not be inclined to ban AI, European Commission Executive Vice President Margrethe Vestager tweeted.

“No matter which #tech we use, we have to continue to advance our freedoms & protect our rights. That’s why we don’t regulate #AI technologies, we regulate the uses of #AI,” she said. “Let’s not throw away in a few years what has taken decades to build.”

The Commission did not respond to a request for comment.

On Wednesday, Elon Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society.

OpenAI has not provided details on how it trains its AI model.

“The lack of transparency is the real problem,” said Johanna Björklund, AI researcher and associate professor at Umeå University in Sweden. “If you do AI research, you should be very transparent about how you do it.”

ChatGPT is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study published last month.