

OpenAI Unveils New Deepfake Detector with 99% Reliability

OpenAI, a pioneer in generative AI, is taking on the task of combating deepfake imagery in response to the growing prevalence of misleading content on social media. At the Wall Street Journal's Tech Live conference in Laguna Beach, California, OpenAI chief technology officer Mira Murati unveiled a new deepfake detector, claiming the tool is 99% reliable at determining whether an image was produced using AI.

The rise of AI-generated images has brought both potential and pitfalls. While lighthearted creations like Pope Francis sporting a puffy Balenciaga coat may seem harmless, deceptive images can have severe consequences: a fake image of an explosion near the Pentagon briefly rattled U.S. stock markets in May 2023. As AI tools become increasingly sophisticated, the challenge lies in distinguishing what is real from what is AI-generated.

While the release date of OpenAI's deepfake detector remains undisclosed, the announcement has generated significant interest, particularly in light of the company's previous endeavors. In January 2023, OpenAI introduced a text classifier that claimed to differentiate between human writing and machine-generated text from models like ChatGPT. By July of that year, however, the tool was quietly shut down due to an unacceptably high error rate: it incorrectly labeled genuine human writing as AI-generated 9% of the time.
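Those two figures, a 99% reliability claim and a 9% false-positive rate, are worth putting side by side. As a minimal back-of-the-envelope sketch (all numbers below are hypothetical, since OpenAI has not published a breakdown of the new detector's error rates), a simple Bayes calculation shows how even a highly accurate detector can mislabel much of what it flags when genuine content vastly outnumbers AI-generated content:

```python
# Back-of-the-envelope Bayes calculation (all inputs hypothetical).
# Shows how a detector's error rates interact with the base rate of
# AI-generated content in a feed.

def flagged_precision(base_rate: float, tpr: float, fpr: float) -> float:
    """Probability that an image flagged as AI-generated really is AI-generated."""
    true_flags = base_rate * tpr          # AI images correctly flagged
    false_flags = (1 - base_rate) * fpr   # genuine images wrongly flagged
    return true_flags / (true_flags + false_flags)

# Suppose 1% of images in a feed are AI-generated, the detector catches
# 99% of them (true positive rate), and wrongly flags 1% of real images.
precision = flagged_precision(base_rate=0.01, tpr=0.99, fpr=0.01)
print(f"Share of flagged images that are actually AI: {precision:.0%}")  # ~50%

# With a 9% false-positive rate, like the retired text classifier:
precision_text = flagged_precision(base_rate=0.01, tpr=0.99, fpr=0.09)
print(f"With a 9% false-positive rate: {precision_text:.0%}")  # ~10%
```

In other words, a detector's practical usefulness depends as much on its false-positive rate and on how much fake content is actually circulating as on its headline accuracy.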

If Murati's claim holds true, this would mark a significant milestone for the industry, as current methods of detecting AI-generated images are typically not automated. Enthusiasts often rely on gut feeling and on well-known weaknesses of generative AI, such as accurately depicting hands, teeth, and repeating patterns. Moreover, the line between AI-generated and AI-edited images remains blurry, especially when AI is used to detect AI.

OpenAI's efforts to detect harmful AI images go beyond the deepfake detector. The company is also implementing guardrails that filter its own models' output beyond what is publicly stated in its content guidelines. As Decrypt has reported, OpenAI's DALL-E tool appears to be at the forefront of this initiative.

In conclusion, OpenAI's announcement of a deepfake detector with a claimed 99% reliability has sparked considerable interest within the industry. The ability to accurately distinguish AI-generated images from real ones would be a significant development, particularly in combating the spread of misleading content. The tool's true impact remains to be seen, however, as previous attempts at automated detection have struggled. OpenAI's commitment to setting guardrails for its own models further demonstrates its dedication to addressing the potential harms of AI-generated content.
