
Featured Story

PYUSD Loans and Tokenized Assets: A New Era in DeFi

Unleashing Potential: PYUSD Loans and Tokenized Real World Assets

In a groundbreaking development within the decentralized finance (DeFi) sector, a Swiss-based platform, Backed, has emerged as a pivotal player by powering PYUSD loans through tokenized Treasury Bill ETFs. This innovative approach not only enhances the utility of PYUSD but also provides new avenues for users to earn yield on their deposits, reshaping the landscape of stablecoins and lending markets.

The Mechanics of PYUSD Loans

Depository Functionality: Users can deposit PYUSD, a regulated USD stablecoin issued by Paxos for PayPal, into a Morpho Blue vault. The vault supports two types of collateral: Backed's tokenized Treasury Bill ETFs and Lido's wstETH.

Yield Generation: Depositors of PYUSD earn yield by lending to borrowers who take out loans. This dual-engine mechanism, a blend of real-world yields and crypto rewards, optimizes returns across varying market conditions.
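
The deposit flow described above amounts to a standard token approval followed by a vault deposit. The sketch below is a minimal Python/web3.py illustration, assuming the vault exposes an ERC-4626-style deposit(assets, receiver) function; the RPC endpoint, contract addresses, private key, and ABI fragments are placeholders for illustration, not the actual PYUSD or Morpho deployments.

```python
from web3 import Web3

# Placeholder values for illustration only -- not real deployments or keys.
RPC_URL = "https://mainnet.example-rpc.org"
PYUSD_ADDRESS = "0x0000000000000000000000000000000000000001"
VAULT_ADDRESS = "0x0000000000000000000000000000000000000002"
PRIVATE_KEY = "0x" + "11" * 32  # never hard-code a real key

# Minimal ABI fragments: ERC-20 approve plus an ERC-4626-style deposit.
ERC20_ABI = [{
    "name": "approve", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "spender", "type": "address"},
               {"name": "amount", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]
VAULT_ABI = [{
    "name": "deposit", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "assets", "type": "uint256"},
               {"name": "receiver", "type": "address"}],
    "outputs": [{"name": "shares", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
account = w3.eth.account.from_key(PRIVATE_KEY)

pyusd = w3.eth.contract(address=PYUSD_ADDRESS, abi=ERC20_ABI)
vault = w3.eth.contract(address=VAULT_ADDRESS, abi=VAULT_ABI)

amount = 1_000 * 10**6  # PYUSD uses 6 decimals, so this is 1,000 PYUSD


def send(call):
    """Build, sign, and broadcast a contract call from `account`."""
    tx = call.build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
    })
    signed = account.sign_transaction(tx)
    # Attribute is `rawTransaction` on older web3.py releases.
    tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
    return w3.eth.wait_for_transaction_receipt(tx_hash)


# Step 1: let the vault pull PYUSD. Step 2: deposit and receive vault shares,
# which accrue the lending yield described in the excerpt.
send(pyusd.functions.approve(VAULT_ADDRESS, amount))
send(vault.functions.deposit(amount, account.address))
```

In practice a depositor would rely on the vault's front end or verified contract addresses; the sketch only shows the two on-chain steps (approve, then deposit) and the yield-bearing shares received in return.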

Protecting Artists' Work: Nightshade - The Revolutionary Tool for Safeguarding Against Generative AI Image Theft

The easiest way to deceive an AI model is to carefully craft the data used to train it. By corrupting the prompt-specific data that is fed into an image generator, Nightshade is able to poison generative AI models, rendering them unable to generate art. This groundbreaking tool provides a potential solution to the ongoing issue of intellectual property theft and the creation of AI deepfakes.

The concept of poisoning in machine learning models is not new. However, the idea of poisoning generative AI models was previously thought to be impossible due to their immense size and complexity. Nightshade challenges this belief by targeting individual prompts rather than the entire model. This approach disables the model's ability to generate art for the poisoned prompts without affecting its overall functionality.

To achieve the desired effect of disabling the generative AI model, the poisoned data must be carefully crafted to appear natural and deceive both automated alignment detectors and human inspectors. This ensures that the corruption goes undetected, allowing Nightshade to effectively cripple the model's ability to create visual images.
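
In code terms, that crafting step can be viewed as a small constrained optimization: nudge an image's pixels, within an imperceptibility budget, until a model's feature extractor maps it to a concept other than the one named in its caption. The following Python/PyTorch sketch illustrates the general idea with a toy stand-in encoder and random tensors in place of real images; it is a generic illustration of perturbation-based data poisoning, not Nightshade's actual algorithm or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyEncoder(nn.Module):
    """Stand-in feature extractor. A real attack would target the image
    encoder actually used by the text-to-image training pipeline."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)


def craft_poison(image, target_image, encoder, eps=8 / 255, steps=200, lr=0.01):
    """Shift `image` (e.g., captioned "dog") toward `target_image` (e.g., a cat)
    in feature space, while keeping the pixel change inside an L-infinity ball
    of radius `eps` so the result still looks natural to a human inspector."""
    with torch.no_grad():
        target_feat = encoder(target_image)

    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (image + delta).clamp(0, 1)
        loss = F.mse_loss(encoder(poisoned), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # re-impose the imperceptibility budget

    return (image + delta.detach()).clamp(0, 1)


if __name__ == "__main__":
    encoder = ToyEncoder().eval()
    dog = torch.rand(1, 3, 64, 64)  # stands in for a photo captioned "dog"
    cat = torch.rand(1, 3, 64, 64)  # stands in for the attacker's target concept
    poisoned_dog = craft_poison(dog, cat, encoder)
    # The poisoned image still resembles the original to the eye, but its
    # features point toward the target concept; enough such pairs scraped into
    # a training set skew what the model associates with the "dog" prompt.
```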

The implications of Nightshade are significant, especially considering the rise of generative AI models in the mainstream. Companies such as Google, Amazon, Microsoft, and Meta have heavily invested in bringing generative AI tools to consumers, making the need to combat intellectual property theft and AI deepfakes increasingly crucial.

In July, researchers at MIT proposed a similar concept of injecting small bits of code into images to cause distortion and make them unusable. While that approach works by distorting the images themselves so they cannot be used, Nightshade takes a different route by targeting the prompt itself. In doing so, it addresses the root problem of generative AI models relying on massive libraries of existing art.

Although Nightshade is currently only a proof of concept, it opens up new possibilities for protecting artists' work and preventing unauthorized use of generative AI models. As Professor Ben Zhao points out, the easiest way to deceive an AI model is to carefully craft the data used to train it. With Nightshade, artists may have a powerful tool at their disposal to safeguard their creations in the rapidly expanding world of generative AI.
