Anthropic

Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.

About us

We're an AI research company that builds reliable, interpretable, and steerable AI systems. Our first product is Claude, an AI assistant for tasks at any scale. Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.

Website
https://www.anthropic.com/
Industry
Research Services
Company size
51-200 employees
Type
Privately Held

Updates

  • Tool use is now generally available. With tool use, Claude can, if instructed, intelligently select and orchestrate tools to solve complex tasks end-to-end. Start building today with the Anthropic API, Amazon Bedrock, or Google Vertex AI: https://lnkd.in/ewVXpBec

    Tool use also supports message streaming, "forced" tool choice, and vision support—helping you build more natural, focused, and multimodal experiences.

    Hear from some of our early customers:

    "Claude with tool use is accurate and cost-effective, and now powers our live voice-enabled AI tutoring sessions. Within just a few days, we integrated tools into our platform. As a result, our AI tutor, Spark.E, acts agentically—displaying interactive UIs, tracking student progress in context, and navigating through lectures and materials. Since implementing Claude with tool use, we've observed a 42% increase in positive human feedback." —Ryan Trattner, CTO and Co-Founder, StudyFetch

    "Claude 3 Haiku with tool use has been a game changer for us. After accessing the model and running our benchmarks on it, we realized the quality, speed, and price combination is unmatched. Haiku is helping us scale our customers' data extraction tasks to a completely new level." —Faisal Ilaiwi, Co-Founder, Intuned (YC S22)

    "We leverage Claude 3 Haiku for generating live suggestions, automating prompt writing, and extracting key metadata from long documents. Claude 3 Haiku's tool use feature has unlocked capabilities and speed for our platform to generate reliable suggestions and prompts in real-time." —Divya Mehta, Product Manager, Hebbia AI


    Tool use is now available in beta to all customers in the Anthropic Messages API, enabling Claude to interact with external tools using structured outputs. If instructed, Claude can perform agentic retrieval of documents from your internal knowledge base and APIs, complete tasks requiring real-time data or complex computations, and orchestrate Claude subagents for granular requests. We look forward to your feedback. Read more in our developer documentation: https://lnkd.in/gknKP_rP
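    As a rough illustration of the structured format involved, a tool is described to the Messages API with a name, a description, and a JSON Schema for its inputs. The get_stock_price tool, its schema, and the model name below are made-up examples, not part of the actual API; consult the developer documentation linked above for the authoritative format.

    ```python
    # Illustrative sketch only: a hypothetical tool definition and request
    # body for the Messages API. Not an official example.
    tool = {
        "name": "get_stock_price",
        "description": "Get the current stock price for a given ticker symbol.",
        "input_schema": {                 # JSON Schema describing the tool's inputs
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL",
                }
            },
            "required": ["ticker"],
        },
    }

    request_body = {
        "model": "claude-3-haiku-20240307",   # example model name
        "max_tokens": 1024,
        "tools": [tool],                      # Claude decides when to call it
        "messages": [
            {"role": "user", "content": "What's the current price of AAPL?"}
        ],
    }

    print(request_body["tools"][0]["name"])  # → get_stock_price
    ```

    When the model chooses to call the tool, the response contains the structured arguments (here, a ticker string) that your code can execute against a real data source.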

  • Our most intelligent model, Claude 3 Opus, is now available on Vertex AI. Alongside Haiku and Sonnet, Opus provides businesses with industry-leading accuracy, coding, reasoning, and vision capabilities.

    Reposted from Google Cloud:

    We've got some exciting announcements today with Anthropic 🎉 Take a look:

    📣 Claude 3 Opus, Anthropic's most intelligent model, is now generally available on Vertex AI.
    ⚙️ Tool use is available for the Claude 3 model family on Vertex AI, enabling Claude to act as an autonomous agent to handle tasks requiring real-time data or complex computations.
    📈 Provisioned throughput is available for the Claude 3 model family on Vertex AI to get assured performance and predictable costs for your production workloads.

    Learn how to get started → https://goo.gle/4aHWL7U

  • This week, we showed how altering internal "features" in our AI, Claude, could change its behavior. We found a feature that can make Claude focus intensely on the Golden Gate Bridge. Now, for a limited time, you can chat with Golden Gate Claude: claude.ai

    Our goal is to let people see the impact our interpretability work can have. The fact that we can find and alter these features within Claude makes us more confident that we're beginning to understand how large language models really work. Read more: https://lnkd.in/ehaU7EbF
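    In very simplified form, "altering a feature" can be pictured as adding a scaled feature direction to one of the model's internal activation vectors. The NumPy sketch below is a toy picture of that idea, not Anthropic's implementation: the feature direction here is random rather than learned, and the dimensions are tiny.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_model = 16                          # toy hidden dimension

    # Hypothetical learned feature direction (unit vector), standing in
    # for a feature like the Golden Gate Bridge one described above.
    feature_dir = rng.normal(size=d_model)
    feature_dir /= np.linalg.norm(feature_dir)

    def steer(activation, direction, strength):
        """Add a scaled feature direction to a hidden-state activation."""
        return activation + strength * direction

    activation = rng.normal(size=d_model)
    steered = steer(activation, feature_dir, strength=10.0)

    # The projection onto the feature direction increases by exactly
    # `strength`, since feature_dir is a unit vector.
    print(steered @ feature_dir - activation @ feature_dir)  # ≈ 10.0
    ```

    Clamping a feature high at every token in this fashion is what makes the steered model fixate on the corresponding concept.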

  • New Anthropic research paper: Scaling Monosemanticity. The first ever detailed look inside a leading large language model. Read the blog post here: https://lnkd.in/eyGAH4yF

    Our previous interpretability work was on small models. Now we've dramatically scaled it up to a model the size of Claude 3 Sonnet. We find a remarkable array of internal features in Sonnet that represent specific concepts—and can be used to steer model behavior.

    The problem: most LLM neurons are uninterpretable, stopping us from mechanistically understanding the models. In October, we showed that the technique of dictionary learning could decompose a small model into "monosemantic" components we call "features"—making the model more interpretable.

    For the first time, we've extracted millions of features from a high-performing, deployed model (Sonnet). These features cover specific people and places, programming-related abstractions, scientific topics, and emotions, among a vast range of other concepts. They are remarkably abstract, often representing the same concept across contexts and languages, and even generalizing to image inputs. Importantly, they also causally influence the model's outputs in intuitive ways.

    Among these millions of features, we find several that are relevant to ensuring model safety and reliability. These include features related to code vulnerabilities, deception, bias, sycophancy, power-seeking, and criminal activity. One notable example is a "secrecy" feature. We observe that it fires for descriptions of people or characters keeping a secret. Activating this feature results in Claude withholding information from the user when it otherwise would not.

    This work is preliminary. While we show that there are many features that seem plausibly relevant to safety applications, much more work is needed to establish that our approach is useful in practice.

    Our research builds on prior work in sparse coding, compressed sensing, and disentanglement in machine learning, mathematics, and neuroscience. We are also pleased to see work from many other research groups applying dictionary learning and related methods to interpretability.

    There's much more in our paper, including detailed analysis of the breadth and specifics of features, many more safety-relevant case studies, and preliminary work on using features to study computational "circuits" in models. Read the full paper here: https://lnkd.in/eaYkgSNM
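    The dictionary-learning setup described above can be sketched as a sparse autoencoder: a model activation is encoded into a much wider, mostly-zero feature vector, then reconstructed as a weighted sum of dictionary directions. The weights in this toy sketch are random and untrained, so it illustrates only the shapes and the sparsity mechanism, not real learned features.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d_model, n_features = 32, 256         # toy sizes; the paper extracts millions of features

    # Random, untrained weights, for illustration only.
    W_enc = rng.normal(size=(n_features, d_model)) / np.sqrt(d_model)
    b_enc = -1.0 * np.ones(n_features)    # negative bias keeps most features off
    W_dec = rng.normal(size=(d_model, n_features)) / np.sqrt(n_features)

    def encode(x):
        """Sparse, non-negative feature activations: ReLU(W_enc x + b_enc)."""
        return np.maximum(W_enc @ x + b_enc, 0.0)

    def decode(f):
        """Reconstruct the activation as a weighted sum of dictionary directions."""
        return W_dec @ f

    x = rng.normal(size=d_model)          # stand-in for a model activation
    f = encode(x)
    x_hat = decode(f)

    # Only a small fraction of the 256 features fire on this input.
    print(f"{np.mean(f > 0):.0%} of features active")
    ```

    Training adjusts the weights so that the reconstruction is faithful while each input activates only a few features; those few active features are the candidates for interpretable concepts.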

    Mapping the Mind of a Large Language Model (anthropic.com)

  • Welcoming Mike Krieger to Anthropic: https://lnkd.in/eg63wQQp

    Mike Krieger, Instagram co-founder, now CPO at Anthropic:

    I'm thrilled to announce that I've joined Anthropic as their Chief Product Officer! The team at Anthropic is exceptional, and I felt at home from my first conversations with them. Daniela, Dario, Jared, and the team embody the combination of deep talent, high empathy, and pragmatism that I loved about the Instagram and Artifact teams I've had the fortune to work with.

    Anthropic's research continues to be at the forefront of AI. When paired with thoughtful product development, I see tons of potential to positively impact how people and companies get their work done. And as a two-time entrepreneur, I'm particularly excited by how Claude, along with the right scaffolding and product features, can empower more people to innovate at a faster pace and at a lower cost.

    I'm eager to dive in and help shape the future of AI-powered products at Anthropic. We're hiring in many areas including product engineering and more; check out our LinkedIn for open roles.
