When Meta shared the raw computer code needed to build a chatbot last year, rival companies said Meta was releasing poorly understood and perhaps even dangerous technology into the world.

Now, in a sign that critics of openly sharing A.I. technology are losing ground to their industry peers, Google is making a similar move. On Wednesday, Google released the computer code that powers its online chatbot, after keeping this kind of technology under wraps for many months.

Much like Meta, Google said the benefits of freely sharing the technology — called a large language model — outweighed the potential risks.

The company said in a blog post that it was releasing two A.I. language models that could help outside companies and independent software developers build online chatbots similar to Google’s own chatbot. Called Gemma 2B and Gemma 7B, they are not Google’s most powerful A.I. technologies, but the company argued that they rivaled many of the industry’s leading systems.

“We’re hoping to re-engage the third-party developer community and make sure that” Google-based models become an industry standard for how modern A.I. is built, Tris Warkentin, a Google DeepMind director of product management, said in an interview.

Google said it had no current plans to release its flagship A.I. model, Gemini, for free. Because it is more effective, Gemini could also cause more harm.

This month, Google began charging for access to the most powerful version of Gemini. By offering the model as an online service, the company can more tightly control the technology.

Worried that A.I. technologies will be used to spread disinformation, hate speech and other toxic content, some companies, like OpenAI, the maker of the online chatbot ChatGPT, have become increasingly secretive about the methods and software that underpin their products.

But others, like Meta and the French start-up Mistral, have argued that freely sharing code — called open sourcing — is the safer approach because it allows outsiders to identify problems with the technology and suggest solutions.

Yann LeCun, Meta’s chief A.I. scientist, has argued that consumers and governments will refuse to embrace A.I. unless it is outside the control of companies like Google, Microsoft and Meta.

“Do you want every A.I. system to be under the control of a couple of powerful American companies?” he told The New York Times last year.

In the past, Google open sourced many of its leading A.I. technologies, including the foundational technology for A.I. chatbots. But under competitive pressure from OpenAI, it became more secretive about how they were built.

The company decided to make its A.I. more freely available again because of interest from developers, Jeanine Banks, a Google vice president of developer relations, said in an interview.

As it prepared to release its Gemma technologies, the company said that it had worked to ensure they were safe and that using them to spread disinformation and other harmful material violated its software license.

“We make sure we’re releasing completely safe approaches both in the proprietary sphere and within the open sphere as much as possible,” Mr. Warkentin said. “With the releases of these 2B and 7B models, we’re relatively confident that we’ve taken an extremely safe and responsible approach in making sure that these can land well in the industry.”

But bad actors might still use these technologies to cause problems.

Google is allowing people to download systems that have been trained on enormous amounts of digital text culled from the internet. Researchers call this “releasing the weights,” referring to the particular mathematical values learned by the system as it analyzes data.
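Those "weights" are just numbers the system settles on as it learns. A toy sketch, in no way Google's actual model, can make the idea concrete: a program tunes a single weight from example data, and "releasing the weight" simply means publishing that learned number so others can use the result without redoing the training.

```python
# Toy illustration of model "weights": one parameter learned from data.
# Real language models learn billions of such values; this is only an analogy.

def train_weight(pairs, steps=1000, lr=0.01):
    """Learn w so that w * x approximates y, via gradient descent."""
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            error = w * x - y
            w -= lr * error * x  # nudge w toward reducing the error
    return w

data = [(1, 2), (2, 4), (3, 6)]  # examples of the rule y = 2x
w = train_weight(data)

# "Releasing the weights" means publishing learned values like w,
# so others can run the model without repeating the costly training.
print(round(w, 2))
```

In a real large language model, the same principle applies at vastly greater scale, which is why the training runs described below are so expensive while downloading the finished weights is not.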

Analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars. Those are resources that most organizations — let alone individuals — do not have.