OpenAI Unveils New Deepfake Detector with 99% Reliability

OpenAI, a pioneer in the field of generative AI, has taken on the task of combating deepfake imagery in response to the growing prevalence of misleading content on social media. During the recent Wall Street Journal's Tech Live conference in Laguna Beach, California, OpenAI's chief technology officer, Mira Murati, unveiled a new deepfake detector. Murati claims that OpenAI's tool has an impressive 99% reliability in determining whether an image was produced using AI.

The rise of AI-generated images has brought both potential and pitfalls. While lighthearted creations like Pope Francis sporting a puffy Balenciaga coat may seem harmless, deceptive images can have severe consequences, even causing financial havoc. As AI tools become increasingly sophisticated, the challenge lies in distinguishing between what is real and what is AI-generated.

While the release date of OpenAI's deepfake detector remains undisclosed, its announcement has generated significant interest, particularly in light of the company's previous efforts. In January 2023, OpenAI introduced a text classifier that claimed to distinguish human writing from machine-generated text produced by models like ChatGPT. By July, however, the tool was quietly shut down due to an unacceptably high error rate: the classifier incorrectly labeled genuine human writing as AI-generated 9% of the time.
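For context, even a seemingly small false positive rate compounds quickly at scale, and a headline figure like "99% reliability" depends heavily on how much of the incoming content is actually AI-generated. The sketch below is a purely illustrative back-of-the-envelope calculation; the submission volume and the share of AI-generated items are hypothetical assumptions, not figures from OpenAI.

```python
# Illustrative only: the error rates come from the article (9% false positives
# for the retired text classifier, 99% claimed accuracy for the new detector),
# but the volume and AI-content share below are hypothetical assumptions.

def flagged_counts(total_items, ai_share, false_positive_rate, true_positive_rate):
    """Return (AI items correctly flagged, human items wrongly flagged)."""
    ai_items = total_items * ai_share
    human_items = total_items - ai_items
    true_positives = ai_items * true_positive_rate
    false_positives = human_items * false_positive_rate
    return true_positives, false_positives

# Hypothetical: 1,000,000 submissions, of which 5% are actually AI-generated.
tp, fp = flagged_counts(
    1_000_000, ai_share=0.05, false_positive_rate=0.09, true_positive_rate=0.99
)
print(f"AI items correctly flagged:   {tp:,.0f}")
print(f"Human items wrongly flagged:  {fp:,.0f}")
print(f"Share of flagged items that are actually human: {fp / (tp + fp):.0%}")
```

Under these assumed numbers, most flagged items would actually be human-written, which is one way to see why OpenAI retired the text classifier and why the practical value of the new image detector will depend on more than a single accuracy figure.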

If Murati's claim holds true, this would mark a significant milestone for the industry, as current methods of detecting AI-generated images are typically not automated. Enthusiasts often rely on gut feelings and focus on well-known challenges that impede generative AI, such as accurately depicting hands, teeth, and patterns. The line between AI-generated images and AI-edited images remains blurry, especially when attempting to use AI to detect AI.

OpenAI's efforts to detect harmful AI images go beyond the deepfake detector itself. The company is also implementing guardrails to censor its own models, going further than what is publicly stated in its content guidelines. As reported by Decrypt, OpenAI's Dall-E tool appears to be at the forefront of this initiative.

In conclusion, OpenAI's announcement of a new deepfake detector with a claimed 99% reliability has sparked considerable interest within the industry. The ability to accurately differentiate between AI-generated and real images would be a significant development, particularly in combating the spread of misleading content. However, the true impact of this tool remains to be seen, as previous attempts at automated detection have faced challenges. OpenAI's commitment to setting guardrails for its own models further demonstrates its dedication to addressing the potential harms of AI-generated content.
