

## Unveiling the Opacity: Stanford Study Reveals Decreasing Transparency in Major AI Foundation Models

Major AI foundation models like ChatGPT, Claude, Bard, and LLaMA 2 are garnering attention for their decreasing transparency, according to a recent study conducted by researchers at Stanford University's Center for Research on Foundation Models (CRFM), a part of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). This lack of transparency among companies in the foundation model space presents challenges for businesses, policymakers, and consumers alike.

In response, companies have expressed differing views on openness and transparency. OpenAI, for example, has shifted its perspective, acknowledging that its initial thinking was flawed and now focusing on safely sharing access to and benefits of its systems. However, MIT research from 2020 suggests that OpenAI has a history of prioritizing secrecy and protecting its image. Anthropic, on the other hand, emphasizes transparent and interpretable AI systems, and Google recently announced the launch of a Transparency Center to address the issue.

But why should users care about AI transparency? Less transparency makes it harder for businesses to build reliably on commercial foundation models and for academics to use them in research.

Less Transparency, Greater Challenges

The Stanford CRFM study highlights the decreasing transparency among major AI foundation models, such as ChatGPT, Claude, Bard, and LLaMA 2. This lack of transparency poses challenges for businesses, policymakers, and consumers alike. Without it, these parties struggle to understand the inner workings and limitations of the models, making it harder to make informed decisions and navigate the AI landscape effectively.

Differing Views on Openness and Transparency

OpenAI, one of the key players in the foundation model space, has undergone a change in perspective regarding transparency. Initially embracing openness, the company has pivoted to prioritizing the safe sharing of access and benefits, a shift that reflects its acknowledgment of the risks associated with unrestricted openness. However, MIT researchers suggest that OpenAI tends to prioritize secrecy and protect its image, potentially undermining its stated commitment to transparency.

Anthropic, a startup focused on AI safety, places a strong emphasis on transparency and interpretability in AI systems. Its core views stress transparency and procedural measures that enable verifiable compliance with its commitments. This stance sets the company apart from other players in the field and demonstrates its dedication to ethical AI practices.

Google's Efforts Towards Transparency

In August 2023, Google announced the launch of its Transparency Center, which aims to address the issue of AI transparency. The initiative reflects the company's commitment to disclosing its policies and providing better visibility into its AI practices. By taking these steps, Google aims to build trust with users and stakeholders, ensuring they are informed about the AI technologies they interact with.

The Importance of AI Transparency

Users should care about AI transparency because it directly affects their ability to build applications on top of foundation models and to rely on them in production. When a model's training data, capabilities, and limitations are opaque, businesses cannot judge whether an application built on it is safe, and they face unforeseen risks that hinder their ability to innovate and deliver reliable AI-powered solutions.

Academics also rely on commercial foundation models for their research. Less transparency limits their understanding of the models' inner workings and may prevent them from effectively utilizing these models in their studies. Transparent AI models enable researchers to explore the strengths and weaknesses of these models, contributing to the advancement of AI knowledge and applications.

In conclusion, the decreasing transparency of major AI foundation models poses challenges for businesses, policymakers, and consumers. Companies in the space, such as OpenAI and Anthropic, hold different views on openness, and Google's Transparency Center initiative reflects its commitment to addressing the issue. Users should prioritize AI transparency because it directly affects their ability to build applications on foundation models and shapes academic research in the field.
