Links for 2024-06-08
AI:
Eric Schmidt poached talent from Apple, SpaceX, and Google to create AI military drones for Ukraine. https://www.forbes.com/sites/sarahemerson/2024/06/06/eric-schmidt-is-secretly-testing-ai-military-drones-in-a-wealthy-silicon-valley-suburb/ [No paywall: https://archive.is/lkr04]
Scott Aaronson recommends Leopold Aschenbrenner's essay: "With unusual clarity, concreteness, and seriousness...Leopold sets out his vision of how AI is going to transform civilization over the next 5-10 years." https://scottaaronson.blog/?p=8047
Will jailbreaking soon be a solved issue? “We introduce Short Circuiting: the first alignment technique that is adversarially robust. Unlike adversarial training which takes days, short circuits can be inserted in under 20 minutes on a GPU. Unlike input/output filters, short circuited models are deployed as normal models with no additional inference cost.” https://arxiv.org/abs/2406.04313
Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data https://epochai.org/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data
Self-Improving Robust Preference Optimization — “…we derive a practical, but mathematically principled offline algorithm to explicitly teach a model to self-improve and be robust to the choice of the eval task at the same-time!” https://arxiv.org/abs/2406.01660
MatMul-free LLMs: Proposes an implementation that eliminates matrix multiplication operations from LLMs while maintaining performance at billion-parameter scales. https://arxiv.org/abs/2406.02528
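The key ingredient is BitNet-style ternary weights in {-1, 0, +1}, under which a dense layer needs only additions and subtractions; the paper also replaces self-attention with a MatMul-free token mixer, which this toy sketch does not cover. Function names are illustrative, not the authors' code:
```python
# Toy illustration of the ternary-weight idea: with weights in {-1, 0, +1},
# a "matrix multiply" reduces to masked sums and differences of input features.
import torch

def ternarize(w: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Absmean-style quantization: scale by mean |w|, then round and clip to {-1, 0, +1}.
    scale = w.abs().mean().clamp(min=eps)
    return (w / scale).round().clamp(-1, 1)

def ternary_linear(x: torch.Tensor, w_t: torch.Tensor) -> torch.Tensor:
    # x: (batch, in_features), w_t: (in_features, out_features) with values in {-1, 0, +1}.
    out = torch.zeros(x.shape[0], w_t.shape[1])
    for j in range(w_t.shape[1]):
        col = w_t[:, j]
        # Output j = sum of inputs with +1 weights minus sum of inputs with -1 weights.
        out[:, j] = x[:, col == 1].sum(dim=1) - x[:, col == -1].sum(dim=1)
    return out

x = torch.randn(4, 16)
w = torch.randn(16, 8)
print(ternary_linear(x, ternarize(w)).shape)  # torch.Size([4, 8])
```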
Grokfast: accelerates the grokking process (delayed generalization) by roughly 50x, sharply reducing the training iterations required, by amplifying the slow-varying component of parameter gradients. https://arxiv.org/abs/2405.20233
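That amplification is only a few lines around the optimizer step: keep an exponential moving average of each gradient (a low-pass filter) and add a scaled copy back before stepping. A minimal sketch assuming a standard PyTorch training loop; treat the hyperparameter names and defaults as illustrative:
```python
# Minimal Grokfast-EMA-style gradient filter: boost the slow-varying
# component of the gradients just before optimizer.step().
import torch

def grokfast_ema(model: torch.nn.Module, ema_grads: dict,
                 alpha: float = 0.98, lamb: float = 2.0) -> dict:
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        g = p.grad.detach()
        if name not in ema_grads:
            ema_grads[name] = g.clone()
        else:
            ema_grads[name].mul_(alpha).add_(g, alpha=1.0 - alpha)  # low-pass filter
        p.grad.add_(ema_grads[name], alpha=lamb)  # amplify the filtered component
    return ema_grads

# Usage inside a training loop:
#   loss.backward()
#   ema_grads = grokfast_ema(model, ema_grads)
#   optimizer.step(); optimizer.zero_grad()
```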
Buffer of Thoughts: Significant performance improvements over previous SOTA methods: 11% on Game of 24, 20% on Geometric Shapes and 51% on Checkmate-in-One. https://github.com/YangLing0818/buffer-of-thought-llm
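Behind those gains is a meta-buffer of reusable high-level "thought templates": for each new problem, retrieve a relevant template, instantiate it with the problem specifics, and reason from there. A schematic sketch only; call_llm, the toy retrieval, and the template texts are hypothetical stand-ins, not the repo's API:
```python
# Schematic Buffer-of-Thoughts-style loop (illustrative, not the repo's API).
from difflib import SequenceMatcher

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you use."""
    raise NotImplementedError

META_BUFFER = {
    "arithmetic game": "Express the goal as an equation search; enumerate operator placements.",
    "board puzzle": "Write the state formally, list legal moves, search for a forcing line.",
}

def solve(problem: str) -> str:
    # 1) Retrieve the most relevant thought template (toy string similarity here).
    template = max(META_BUFFER.items(),
                   key=lambda kv: SequenceMatcher(None, kv[0], problem).ratio())[1]
    # 2) Instantiate the template with the concrete problem and reason from it.
    return call_llm(f"Template: {template}\nProblem: {problem}\nSolve step by step.")
```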
BitsFusion: Compresses the UNet of Stable Diffusion v1.5 (1.72 GB in FP16) to an average of 1.99 bits per weight (219 MB), a 7.9X compression ratio, while matching or even exceeding the original model's generation quality in the authors' evaluations. https://snap-research.github.io/BitsFusion/
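The quoted figures hang together under simple arithmetic; the ~860M parameter count for the SD v1.5 UNet is an assumption, not taken from the post:
```python
# Sanity check of the quoted compression figures.
fp16_bytes = 1.72e9          # UNet checkpoint size quoted above
quantized_bytes = 219e6      # compressed size quoted above
print(fp16_bytes / quantized_bytes)       # ≈ 7.85x, matching the ~7.9X claim

unet_params = 860e6          # approximate SD v1.5 UNet parameter count (assumption)
print(quantized_bytes * 8 / unet_params)  # ≈ 2.04 bits/weight, close to the 1.99-bit figure
```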
YOLOv10, a powerful real-time object detection model, reduces latency by 46% and parameter count by 25% compared to its predecessor. https://github.com/THU-MIG/yolov10
σ-GPTs: A New Approach to Autoregressive Models https://arxiv.org/abs/2404.09562
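The gist: train on randomly permuted sequences with a double positional encoding, conditioning each step on both the position of the token just seen and the position of the token to predict, so tokens can be generated in any order. A toy data-preparation sketch (names are illustrative, not the authors' code):
```python
# Build a σ-GPT-style training example: tokens in a random order, plus the
# positions the model is conditioned on (seen position and target position).
import torch

def make_sigma_batch(tokens: torch.Tensor) -> dict:
    # tokens: (seq_len,) token ids in natural order.
    seq_len = tokens.shape[0]
    order = torch.randperm(seq_len)   # random generation order
    shuffled = tokens[order]
    return {
        "input_ids": shuffled[:-1],   # tokens seen so far, in permuted order
        "input_pos": order[:-1],      # original position of each seen token
        "target_pos": order[1:],      # original position of the token to predict
        "targets": shuffled[1:],      # the token at that target position
    }

batch = make_sigma_batch(torch.arange(10))
print(batch["input_pos"].tolist(), batch["target_pos"].tolist())
```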
Google releases new tool to automate Python code optimization. https://labs.google.com/code/transformer
Miscellaneous: