OpenAI CEO sees opportunity in Japan

Sam Altman, CEO of OpenAI, the US company that developed ChatGPT, is visiting Japan, where he met Prime Minister Kishida Fumio on Monday morning.

After the meeting, Altman summed up the agenda in clear terms. "We talked about the upside of this technology and how to mitigate the downside," he said.

OpenAI CEO Sam Altman visits the prime minister's office in Tokyo.

Altman added that he is looking at opening an office in Japan, saying he'd like to "build something great for the Japanese people and make models better with Japanese language and Japanese culture."

The government also sees potential in the technology. Chief Cabinet Secretary Matsuno Hirokazu mentioned it during his regular news conference on Monday morning.

He said the government may consider using AI to reduce the workload of civil servants.

But he also mentioned the risks inherent in the new technology. "The government will continue to work to grasp developments in AI technology while considering how to handle classified information and respond to concerns about information leaks," he said.

Academic concerns on the rise

The meeting between Kishida and Altman comes at a time when some Japanese universities have set standards for the use of ChatGPT and other AI technologies. Many are warning that the potential impact on education remains largely unknown.

ChatGPT can produce natural-sounding text that reads as if it were written by a human, and can easily generate reports and papers.

At an entrance ceremony at Kyoto University on Friday, the university's president, Minato Nagahiro, pointed out that sentences written using AI may contain incorrect information. He said the usual human-led verification process is missing, and added that he wants students to take the time to produce their own work.

The University of Tokyo published an appeal from its vice president. Writing on an in-house website, he said students should not rely solely on generative AI to create reports or academic theses, but should write them themselves.

One of Japan's most prestigious universities, the University of Tokyo, has issued guidelines for student use of AI chatbot technologies.

Sophia University also released guidelines saying it will not allow the use of AI technology in reports, essays and dissertations, and that it will take strict measures if such use is confirmed. Use will be permitted, however, within the scope of instructions from teachers.

Tohoku University called for caution on a different front related to AI. It said, "If you enter unpublished papers or information that should be kept secret, there is a possibility that the information may be leaked unintentionally."

The Ministry of Education says it also plans to draw up guidance on how schools should handle the use of AI chatbot technologies.

Human element a factor

As the technology spreads rapidly, how it is handled is becoming increasingly important.

Cybersecurity experts are warning that anyone could use AI-powered chatbots to write deceptive phishing emails or malware. They also say the technology can help program ransomware cyberattacks.

ChatGPT is specifically designed not to respond to queries potentially linked to illegal activity. The software refuses direct commands to write malware, but it can comply when the request is rephrased.

Artificial intelligence experts, industry executives including Elon Musk, and more than 1,000 others issued an open letter in March calling for a six-month pause in developing systems more powerful than OpenAI's current version, GPT-4.

Despite being one of the co-founders of OpenAI, Elon Musk has called for a pause in the development of more powerful systems.

The letter says, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

Italy's data protection authority announced late last month that it would temporarily ban the use of ChatGPT in the country.

It says OpenAI failed to properly inform users about the data it collects or to ensure that those using ChatGPT are above a certain age. It also says the developer apparently collects a vast array of personal data, without legal basis, to train its artificial intelligence, which may violate Italian laws on personal data protection.