Unveiling the Opacity: Stanford Study Reveals Decreasing Transparency in Major AI Foundation Models

Major AI foundation models like ChatGPT, Claude, Bard, and Llama 2 are garnering attention for their decreasing transparency, according to a recent study by researchers at Stanford University's Center for Research on Foundation Models (CRFM), part of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). This lack of transparency among companies in the foundation model space presents challenges for businesses, policymakers, and consumers alike. The companies themselves hold differing views on openness. OpenAI, for example, has shifted its stance on transparency, acknowledging that its initial thinking was flawed and focusing now on safely sharing access to and the benefits of its systems. Conversely, MIT Technology Review reporting from 2020 suggests that OpenAI has a history of prioritizing secrecy and protecting its image. Anthropic, on the other hand, emphasizes transparent and interpretable AI systems, and Google recently announced the launch of a Transparency Center to address the issue. But why should users care about AI transparency? With less transparency, businesses struggle to rely on commercial foundation models for their applications, and academics struggle to rely on them for their research.

Less Transparency, Greater Challenges

The Stanford CRFM study highlights decreasing transparency among major AI foundation models such as ChatGPT, Claude, Bard, and Llama 2. This opacity poses challenges for businesses, policymakers, and consumers alike: without insight into how these models work and where their limits lie, these stakeholders struggle to make informed decisions and navigate the AI landscape effectively.

Differing Views on Openness and Transparency

OpenAI, one of the key players in the foundation model space, has undergone a change in perspective regarding transparency. Having initially embraced openness, it now prioritizes the safe sharing of access and benefits, a shift that reflects its acknowledgment of the risks of unrestricted release. However, MIT Technology Review reporting suggests that OpenAI tends to prioritize secrecy and protect its image, which may undercut its stated commitment to transparency.

Anthropic, a startup focused on AI safety, places a strong emphasis on transparency and interpretability in AI systems. Its core views stress transparency and procedural measures that make compliance with its commitments verifiable. This stance sets the company apart from other players in the field and reflects a dedication to ethical AI practices.

Google's Efforts Towards Transparency

In August 2023, Google announced the launch of its Transparency Center, an initiative aimed at addressing AI transparency by disclosing the company's policies and providing better visibility into its AI practices. Through these steps, Google aims to build trust with users and stakeholders and to keep them informed about the AI technologies they interact with.

The Importance of AI Transparency

Users should care about AI transparency because it directly affects their ability to safely build applications on top of foundation models. When model providers disclose little, businesses cannot readily judge whether an application built on a given model is safe, leaving them exposed to unforeseen risks and limitations and hindering their ability to innovate and deliver reliable AI-powered solutions.

Academics also rely on commercial foundation models in their research. Limited transparency obscures the models' inner workings and can prevent researchers from using them effectively in their studies. Transparent models, by contrast, let researchers probe strengths and weaknesses, contributing to the advancement of AI knowledge and applications.

In conclusion, the decreasing transparency of major AI foundation models poses challenges for businesses, policymakers, and consumers. Companies in the space, such as OpenAI and Anthropic, take different approaches to openness, and Google's Transparency Center initiative reflects an effort to address the issue. Users should prioritize AI transparency because it directly affects their ability to build applications on foundation models, and it shapes academic research in the field as well.
