Anthropic Seeks Claude Mythos Access For Banking Sector
Google announced a $10 billion cash investment on Friday to support a major expansion of Anthropic's computing capacity. The company's latest LLM iteration, Claude Mythos, possesses capabilities that Anthropic says prevent its release to the general public. Under an initiative called Project Glasswing, Anthropic aims to limit the model to high-stakes government and critical infrastructure use; the restriction follows concerns regarding the tool's ability to reveal vulnerabilities in banking and other sensitive systems. News reports indicate the NSA is already using the model, and courts have already limited certain aspects of the company's disputes. The company views the AI sector as a diverse field requiring specific regulatory considerations.
Sources · 7 independent
“Google announced it would invest an immediate $10 billion in cash to help support a major expansion of Anthropic's computing capacity. This investment will then be tripled if performance targets are met.”
“The core of that standoff was Anthropic's insistence that, for autonomous weapons, there should be oversight, and that mass surveillance should not be enabled.”
“last week Anthropic announced it was investigating a claim that its cybersecurity tool, Claude Mythos, might have been accessed by unauthorized users.”
“In February, the US Defense Department ended a $200 million contract with the firm over its use of AI services because Anthropic refused to loosen its own ethical guidelines.”
“Anthropic refused to release this to the public, this Claude Mythos cyber tool, saying it's too dangerously powerful.”
“Why does Anthropic think that its most powerful AI tool, which for some reason is called Claude Mythos, why does it think it's so powerful it cannot be released to the public?”
“I've been asking him how Anthropic had been operating in recent months and what it told us about how the company saw itself.”
“Thomas Woodside is co-founder and senior policy advisor at…”
“So far we've found no category or complexity of vulnerability that humans can find this model can't.”
“it's capable of finding and exploiting security vulnerabilities at a massive scale and that it's already found brand new vulnerabilities in many operating systems.”
“New York and California passed somewhat different laws that don't limit liability, but instead require companies to have safety plans and to follow those safety plans and also to report certain incidents.”
“The federal government has been very slow to act on putting in AI protections. And so I think the states are leading on that issue.”
“The federal government has been very slow... they had a contract dispute with the Department of Defense. Their dispute was sort of over whether their AI systems would be used in autonomous weapons or mass domestic surveillance.”
“Their dispute was sort of over whether their AI systems would be used in autonomous weapons or mass domestic surveillance. And the department's position was sort of that they weren't going to do this.”
“There are news reports recently that the NSA is using Anthropic's Claude Mythos model. It remains to be seen how significant this kind of dispute is going to be.”
“Anthropic's position was actually that this liability limitation should not be there.”
“President Trump made comments about Anthropic. He continued to say that they were left-wing, but he said they were also high IQ.”
“And also some of their dispute has been sort of limited already in courts. So I think it remains to be seen exactly how significant that will be.”
“It is going to, at this point, as part of this thing called Project Glasswing, be used only in high-stakes infrastructure by government, etc.”
“The fact that Mythos, which is the latest model in a series of iterations of the large language model called Claude, actually has capabilities that mean it cannot be released to the general public.”
“seeking Mythos access for the banking sector, because a tool and innovation that is so powerful it can reveal these vulnerabilities, which have sometimes existed for decades, is really actually important”