COINPURO - Crypto Currency Latest News
Cryptopolitan 2026-04-15 16:44:17

Anthropic and OpenAI tighten security as AI models show advanced hacking ability

Artificial intelligence companies Anthropic and OpenAI are taking serious steps to address the growing risks associated with their products. Altman’s firm has released models exclusively for experts to help defend vulnerable systems, while Anthropic now requires ID verification before users can access certain functions. When AI models first reached the public, they were used to turn text into Ghibli-style art and write shopping lists, but artificial intelligence has quickly become a national security concern.

Why is Anthropic asking for my driver’s license?

Hackers are already using AI to bypass defense systems, prompting Anthropic to roll out a mandatory identity verification process. Users now need a physical government ID (a passport or driver’s license) and a live selfie to use specific functions. Anthropic’s partner, Persona, handles the data, and the company has clarified that it will not use identity data to train its AI models. It also stated that verification is necessary to “prevent abuse, enforce our usage policies, and comply with legal obligations.” If a user fails the check or tries to use the system from an unsupported location, their account can be banned.

The sudden crackdown follows Anthropic’s admission that its new model, Claude Mythos Preview, is terrifyingly good at hacking. In a blog post released alongside the verification news, the company stated that Mythos Preview is “capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so.” Engineers at Anthropic with no formal security training asked Mythos to find remote code execution vulnerabilities overnight; according to the company, they “woke up the following morning to a complete, working exploit.”

Are the new AI models actually dangerous?

The UK’s AI Security Institute (AISI) published an evaluation confirming that Mythos represents a “step up” in cyber capabilities.
Anthropic’s internal blog post provides the most alarming details about the model’s capabilities. After receiving its initial prompt, Mythos found a 27-year-old bug in OpenBSD, an operating system known for its security. Mythos also found a 16-year-old bug in FFmpeg, a video tool used by almost every major service. FFmpeg has been tested with millions of random inputs in a technique called fuzzing, yet Mythos found a vulnerability in its H.264 codec that dates back to a 2003 commit. Beyond that, Mythos found a 17-year-old vulnerability in FreeBSD’s NFS server and wrote an exploit that allows any unauthenticated user on the internet to gain full root access to the server. The company confirmed that Mythos Preview “fully autonomously identified and then exploited this vulnerability.” The entire process cost under $2,000 at API pricing and took less than a day.

Mythos found vulnerabilities in every major web browser. In one case, it wrote a browser exploit that chained together four vulnerabilities, including a JIT heap spray, to escape both the browser’s renderer sandbox and the operating system’s sandbox. Anthropic has found “thousands of additional high- and critical-severity vulnerabilities” across open-source and closed-source software, and over 99% of these bugs have not yet been patched.

OpenAI’s approach to security risks

Despite these risks, OpenAI has announced the release of GPT-5.4-Cyber, which, unlike standard models that refuse to help with hacking for safety reasons, “lowers the refusal boundary for legitimate cybersecurity work.” GPT-5.4-Cyber can analyze compiled software without access to the source code to detect malware and vulnerabilities, but access is limited to OpenAI’s “Trusted Access for Cyber” (TAC) program. Only vetted cybersecurity experts, researchers, and organizations defending critical systems can use it.
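For readers unfamiliar with the fuzzing technique mentioned above, the idea is simple: hammer a parser with huge numbers of random inputs and watch for crashes. The sketch below is a minimal illustration only, not how FFmpeg is actually fuzzed; `parse_header` is a hypothetical toy parser with a deliberately planted bug, standing in for a real decoder.

```python
# Minimal fuzzing sketch: feed random bytes to a parser, count crashes.
import random
import struct

def parse_header(data: bytes) -> int:
    """Toy parser with a planted bug: it trusts a length field."""
    if len(data) < 4:
        raise ValueError("too short")  # clean rejection, not a crash
    (length,) = struct.unpack(">I", data[:4])
    payload = data[4:4 + length]
    # Bug: no check that `length` actually fits the payload we got.
    return payload[length - 1]  # IndexError on truncated input

def fuzz(trials: int = 10_000, seed: int = 0) -> int:
    """Throw random byte strings at the parser; return the crash count."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 32)))
        try:
            parse_header(data)
        except ValueError:
            pass  # input rejected cleanly
        except IndexError:
            crashes += 1  # a real fuzzer would save this input for triage
    return crashes

print(fuzz())  # prints how many random inputs crashed the toy parser
```

Real fuzzers such as AFL and libFuzzer add coverage feedback and input mutation on top of this loop, which is one reason subtle bugs like the 2003-era H.264 flaw can still survive millions of random inputs.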
Anthropic’s Project Glasswing also gives limited access to defenders at companies like Amazon ($AMZN), Apple ($AAPL), and Google ($GOOGL) so they can fix critical infrastructure before attackers exploit it. In the meantime, Anthropic recommends installing security updates immediately rather than on a monthly schedule.
