Amazon's AWS chip lab in Austin is developing its Trainium family of AI chips. The latest Trainium3 chip offers comparable performance while cutting costs by up to 50% relative to traditional cloud servers. The chips have also been optimized for AI inference, powering services such as Amazon Bedrock and mainstream AI models including Anthropic's Claude, which has already been deployed on more than 1 million Trainium2 chips. Amazon recently reached an agreement with OpenAI to supply 2 gigawatts of Trainium capacity. The lab's team focuses on rapid chip design and launch, aiming to offer an economical alternative to Nvidia's dominant position in the GPU market.
