AISec [x\x feed]

Description
Some "sohranyonki" about AI Security - without paywall and ads. X, articles, researchs and tools.


Pwn AI [channel] - https://t.me/pwnai

@wearetyomsmnv - p.o
Advertising
We recommend to visit

Community chat: https://t.me/hamster_kombat_chat_2

Twitter: x.com/hamster_kombat

YouTube: https://www.youtube.com/@HamsterKombat_Official

Bot: https://t.me/hamster_kombat_bot
Game: https://t.me/hamster_kombat_bot/

Last updated 3 months, 2 weeks ago

Your easy, fun crypto trading app for buying and trading any crypto on the market.

📱 App: @Blum
🆘 Help: @BlumSupport
ℹ️ Chat: @BlumCrypto_Chat

Last updated 3 months, 1 week ago

Turn your endless taps into a financial tool.
Join @tapswap_bot


Collaboration - @taping_Guru

Last updated 6 days, 5 hours ago

1 week, 6 days ago
266 - Machine Learning Attacks and …

266 - Machine Learning Attacks and Tricky Null Bytes

https://dayzerosec.com/podcast/266.html

2 weeks, 2 days ago
Deobfuscate Android App: LLM tool to …

Deobfuscate Android App: LLM tool to find any potential security vulnerabilities in Android apps and deobfuscate Android app code
https://github.com/In3tinct/deobfuscate-android-app

2 weeks, 3 days ago
3 months, 2 weeks ago

https://www.freedium.cfd/https://medium.com/@lars_13145/securing-the-unknowns-the-need-for-ai-red-team-585671500dad

www.freedium.cfd

🚀 Securing The Unknowns: The need for AI Red Team | by Lars Godejord - Freedium

"You can&#39t secure what you don&#39t understand."

3 months, 2 weeks ago

Well, that escalated quickly:
The Single-Turn Crescendo Attack (STCA)
A novel LLM red-teaming technique for Responsible AI

This paper explores a novel approach to adversarial
attacks on large language models (LLM): the
Single-Turn Crescendo Attack (STCA). The STCA
builds upon the multi-turn crescendo attack
established by Mark Russinovich, Ahmed Salem,
Ronen Eldan. Traditional multi-turn adversarial
strategies gradually escalate the context to elicit
harmful or controversial responses from LLMs.
However, this paper introduces a more efficient
method where the escalation is condensed into a
single interaction. By carefully crafting the prompt to
simulate an extended dialogue, the attack bypasses
typical content moderation systems, leading to the
generation of responses that would normally be
filtered out. I demonstrate this technique through a
few case studies. The results highlight
vulnerabilities in current LLMs and underscore the
need for more robust safeguards. This work
contributes to the broader discourse on responsible
AI (RAI) safety and adversarial testing, providing
insights and practical examples for researchers and
developers. This method is unexplored in the
literature, making it a novel contribution to the field.

https://www.arxiv.org/pdf/2409.03131

We recommend to visit

Community chat: https://t.me/hamster_kombat_chat_2

Twitter: x.com/hamster_kombat

YouTube: https://www.youtube.com/@HamsterKombat_Official

Bot: https://t.me/hamster_kombat_bot
Game: https://t.me/hamster_kombat_bot/

Last updated 3 months, 2 weeks ago

Your easy, fun crypto trading app for buying and trading any crypto on the market.

📱 App: @Blum
🆘 Help: @BlumSupport
ℹ️ Chat: @BlumCrypto_Chat

Last updated 3 months, 1 week ago

Turn your endless taps into a financial tool.
Join @tapswap_bot


Collaboration - @taping_Guru

Last updated 6 days, 5 hours ago