Monday, December 30, 2024

Keep the code behind AI open, say two entrepreneurs

No one doubts that artificial intelligence (AI) will change the world. But a doctrinal dispute continues to rage over the design of AI models, namely whether the software should be “closed-source” or “open-source”—in other words, whether code is proprietary, or public and open to modification by anyone.

Some argue that open-source AI is a dead end or, even worse, a threat to national security. Critics in the West have long maintained that open-source models strengthen countries like China by giving away secrets, allowing them to identify and exploit vulnerabilities. We believe the opposite is true: that open-source will power innovation in AI and continue to be the most secure way to develop software.

This is not the first time America’s tech industry and its standard-setters and regulators have had to think about open-source software and open standards with respect to national security. Similar discussions took place around operating systems, the internet and cryptography. In each case, the overwhelming consensus was that the right way forward was openness.

There are several reasons why. One is that regulation hurts innovation. America leads the world in science and technology. On a level playing field it will win; with one hand tied behind its back it might well lose. Restricting open-source AI development would do exactly that. A potential talent pool that once spanned the globe would shrink to the four walls of whichever institution or company developed the model. Meanwhile, the rest of the world, including America's adversaries, would continue to reap the benefits of open-source and the innovation it enables.

A second reason is the widely accepted view that open-source makes systems safer. More users—from government, industry and academia, as well as hobbyists—means more people analysing code, stress-testing it in production and fixing any problems they identify.

A good example in the sphere of national security is Security-Enhanced Linux (SELinux). It was originally developed by America's National Security Agency as a collection of security patches for the open-source Linux operating system, and has been part of the mainline Linux kernel for more than 20 years. This learn-from-others approach is vastly more robust than one based on proprietary operating systems that can be fixed only by their vendors, on whatever timelines they can manage.

There is much discussion in Western national-security circles about preventing other states from gaining access to state-of-the-art AI technology. But restricting open-source will not accomplish this goal. In the case of China, that is because the horse has bolted. China is already at the cutting edge of AI: it may well have more AI researchers than America, and it is already producing very competitive models. According to one popular system for ranking large language models, China has three of the world's top seven open-source models.

Some Chinese companies are also finding ways to get around export controls on graphics processing units (GPUs), the specialised circuits that excel at the parallel linear algebra underpinning modern AI. Even American companies are not easily persuaded to overlook billions in revenue. An earlier attempt to prohibit the export of high-end Intel chips resulted in China developing what was then the world's fastest supercomputer, built on a novel, home-grown computing architecture.

The inability of American companies to keep proprietary, infrastructure-critical intellectual property secure has a long history. Huawei, for instance, has publicly admitted to copying proprietary code from Cisco. As recently as March 2024, the FBI arrested a Chinese national and former Google engineer for allegedly stealing AI trade secrets from the company, which is renowned for its security.

A question to ask is whether we want to live in a world where we understand the fundamental nature of other countries’ AI capabilities—because they’re based in part on open-source technology—or a world where we’re trying to figure out how they work. There is no third option where China, for example, doesn’t have advanced AI capabilities.

The final reason to favour open-source is that it drives innovation. The argument that we should move away from open-source models because they cannot compete with proprietary models on performance or cost is plain wrong. Foundation models are on their way to becoming a key component of application infrastructure. And since at least the mid-1990s the majority of impactful new infrastructure technologies have been open-source.

There’s no clear reason why AI models will be different. Today’s AI is rooted in open-source and open research, and the stunning advances in generative AI over the past two years—with the rise of OpenAI, Mistral, Anthropic and others—can be largely attributed to the openness of the preceding decade. Today, many of the most advanced uses of AI are the product of developers running and fine-tuning open-source models. Many of the most advanced users of AI are in communities that have grown organically around open-source. The die has been cast.

There is, of course, room for different business and development models to thrive, and no one should take national security lightly. But restricting open-source would hamstring an approach that has held its own when it comes to security while driving three decades of innovation.

Martin Casado is a general partner at Andreessen Horowitz. Ion Stoica is a professor of computer science at UC Berkeley and co-founder and executive chairman of Databricks and Anyscale.

For a different view on the open-versus-closed AI debate, see this article by Lawrence Lessig.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
