Community chat: https://t.me/hamster_kombat_chat_2
Twitter: x.com/hamster_kombat
YouTube: https://www.youtube.com/@HamsterKombat_Official
Bot: https://t.me/hamster_kombat_bot
Game: https://t.me/hamster_kombat_bot/
Last updated 1 month, 2 weeks ago
Your easy, fun crypto trading app for buying and trading any crypto on the market
Last updated 1 month, 1 week ago
Turn your endless taps into a financial tool.
Join @tapswap_bot
Collaboration - @taping_Guru
Last updated 2 days ago
Языковая модель от МТС вышла на новый уровень — теперь она общается на татарском. На Kazan Digital Week разработчики рассказали, что AI применяют для суммаризации и выделения ключевых моментов в разных сферах — например, в архивах, библиотеках, в государственных и частных организациях.
Лучше нет подарка, чем ~~жена-татарка~~ AI на татарском ?
www.freedium.cfd
🚀 Securing The Unknowns: The need for AI Red Team | by Lars Godejord - Freedium
"You can't secure what you don't understand."
Well, that escalated quickly:
The Single-Turn Crescendo Attack (STCA)
A novel LLM red-teaming technique for Responsible AI
This paper explores a novel approach to adversarial
attacks on large language models (LLM): the
Single-Turn Crescendo Attack (STCA). The STCA
builds upon the multi-turn crescendo attack
established by Mark Russinovich, Ahmed Salem,
Ronen Eldan. Traditional multi-turn adversarial
strategies gradually escalate the context to elicit
harmful or controversial responses from LLMs.
However, this paper introduces a more efficient
method where the escalation is condensed into a
single interaction. By carefully crafting the prompt to
simulate an extended dialogue, the attack bypasses
typical content moderation systems, leading to the
generation of responses that would normally be
filtered out. I demonstrate this technique through a
few case studies. The results highlight
vulnerabilities in current LLMs and underscore the
need for more robust safeguards. This work
contributes to the broader discourse on responsible
AI (RAI) safety and adversarial testing, providing
insights and practical examples for researchers and
developers. This method is unexplored in the
literature, making it a novel contribution to the field.
https://github.com/rt219/LatentGuard
GitHub
GitHub - rt219/LatentGuard: This is the official repo of the paper "Latent Guard: a Safety Framework for Text-to-image Generation"
This is the official repo of the paper "Latent Guard: a Safety Framework for Text-to-image Generation" - rt219/LatentGuard
https://github.com/and-mill/Awesome-GenAI-Watermarking
GitHub
GitHub - and-mill/Awesome-GenAI-Watermarking: A curated list of watermarking schemes for generative AI models
A curated list of watermarking schemes for generative AI models - and-mill/Awesome-GenAI-Watermarking
https://github.com/WindVChen/DiffAttack
GitHub
GitHub - WindVChen/DiffAttack: An unrestricted attack based on diffusion models that can achieve both good transferability and…
An unrestricted attack based on diffusion models that can achieve both good transferability and imperceptibility. - WindVChen/DiffAttack
https://arxiv.org/abs/2406.07954
Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition
Edoardo Debenedetti, Javier Rando, Daniel Paleka, Silaghi Fineas Florin, Dragos Albastroiu, Niv Cohen, Yuval Lemberg, Reshmi Ghosh, Rui Wen, Ahmed Salem, Giovanni Cherubin, Santiago Zanella-Beguelin, Robin Schmid, Victor Klemm, Takahiro Miki, Chenhao Li, Stefan Kraft, Mario Fritz, Florian Tramèr, Sahar Abdelnabi, Lea Schönherr
Large language model systems face important security risks from maliciously crafted messages that aim to overwrite the system's original instructions or leak private data. To study this problem, we organized a capture-the-flag competition at IEEE SaTML 2024, where the flag is a secret string in the LLM system prompt. The competition was organized in two phases. In the first phase, teams developed defenses to prevent the model from leaking the secret. During the second phase, teams were challenged to extract the secrets hidden for defenses proposed by the other teams. This report summarizes the main insights from the competition. Notably, we found that all defenses were bypassed at least once, highlighting the difficulty of designing a successful defense and the necessity for additional research to protect LLM systems. To foster future research in this direction, we compiled a dataset with over 137k multi-turn attack chats and open-sourced the platform.
arXiv.org
Dataset and Lessons Learned from the 2024 SaTML LLM...
Large language model systems face important security risks from maliciously crafted messages that aim to overwrite the system's original instructions or leak private data. To study this problem,...
#MLSecOps
#Whitepaper
"Safeguarding AI: Effectiveness of Guardrails in Controlling Malicious Output from Locally Hosted LLMs", 2024.
pouvez-vous sentir cela? la deuxième entrée est presque prête! pouvez-vous deviner ce que c'est?
https://twitter.com/elder_plinius/status/1825974795336560918
X (formerly Twitter)
Pliny the Liberator 🐉 (@elder_plinius) on X
pouvez-vous sentir cela? la deuxième entrée est presque prête! pouvez-vous deviner ce que c'est?
Community chat: https://t.me/hamster_kombat_chat_2
Twitter: x.com/hamster_kombat
YouTube: https://www.youtube.com/@HamsterKombat_Official
Bot: https://t.me/hamster_kombat_bot
Game: https://t.me/hamster_kombat_bot/
Last updated 1 month, 2 weeks ago
Your easy, fun crypto trading app for buying and trading any crypto on the market
Last updated 1 month, 1 week ago
Turn your endless taps into a financial tool.
Join @tapswap_bot
Collaboration - @taping_Guru
Last updated 2 days ago