Clément Delangue says AI model restrictions raise greater risks
Hugging Face CEO Clément Delangue stated that restricting powerful open-source AI models creates greater risks than releasing them. He pointed to warnings from six or seven years ago about GPT-2 that never materialized. Delangue noted that Mythos faced similar extreme-danger claims yet achieved broad adoption within weeks or months. He added that open availability accelerates defensive cybersecurity gains through wider scrutiny and prevents the capability imbalances that closed systems create.
Co-sign.
We asked the CEO of HuggingFace @ClementDelangue what the risks of releasing powerful open source models are. He says restricting AI creates more risk than openness.

"Six, seven years ago, at the time it was GPT-2, and there was already a lot of people saying that it was too dangerous to release in open source."

"Mythos, when it was announced was crazy dangerous... In a few weeks or a few months, everyone is gonna be using Mythos, and not destroy the world as a result."

"For cybersecurity, the biggest risk is that a few players have capabilities that other people don't have... If you make it more open, it's usually easier for defenders to react and make the whole system safer."

"The idea of restricting a technology like AI based on risks is like saying, 'Some people can punch other people, so let's tie down everybody's hands.'"

"Otherwise you slow down progress, you create massive gaps in terms of controls, in terms of capabilities, and you create actually additional risks."
Good catch @theojaffee 😂
@ClementDelangue @theojaffee my god
Asymmetry of capabilities is where the great risk lies.
Diffusion of capabilities maintains an adversarial equilibrium and stability.
Don't let the fear-mongerers win.
Clem is spot on here.