🔥 Open Position: Research Intern/Collaborator – Virtual Staining of Histopathology Images
🔸 Join our CVPR conference paper project on Virtual Staining!
We are looking for dedicated researchers, with a preference for local candidates, as this role requires 20 hrs/week of in-person collaboration.
*🔸 Technical Requirements:*
💠 Strong English reading & writing skills for technical documentation.
💠 Hands-on experience with:
🌀 PyTorch & deep learning fundamentals
🌀 Running & troubleshooting GitHub repositories
🌀 Exposure to generative models (GANs, diffusion models) is a plus!
🌀 Ability to write clean, organized Python code
*🔸 Non-Technical Requirements:*
💠 Commitment to 20 hrs/week in-person work at our lab
💠 Persistence in solving technical challenges (e.g., debugging model training)
💠 Strong teamwork & communication skills
💠 Curiosity about medical imaging & generative AI
*🔸 Why Join?*
💠 Mentorship from Dr. Rohban & the RIML Lab team
💠 Hands-on experience with generative models (GANs/Diffusion) for medical imaging
💠 Work with collaborative coding (GitHub) & Linux-based workflows
💠 Opportunity for CVPR-tier co-authorship & strong recommendation letters
*📩 How to Apply*
Please send the following to [email protected]:
💠 Your CV
💠 GitHub/code samples
💠 A brief note explaining:
🌀 Your interest in generative models for healthcare
🌀 Your commitment to 20 hrs/week of in-person work
🚀 Join Peter Stone’s Talk at Sharif University of Technology
*🎙 Title:* Multiagent RL: Cooperation and Competition
👨🏫 Speaker: Peter Stone (Professor of Computer Science, University of Texas at Austin)
📅 Date: Thursday (Feb 27, 2025)
🕗 Time: 3:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/M4QxTUWimGyvUmPv7
🎥 Video of the first session of the System 2 course
🔸 Topic: Introduction & Motivation
🔸 Instructors: Dr. Rohban and Mr. Samiei
🔸 Date: Bahman 21, 1403 (February 9, 2025)
🔸 YouTube link
🔸 Aparat link
Hello everyone. The slides from the Research Week presentation on the NeurIPS paper accepted from RIML are hereby shared with you. I have also posted a thread with some explanations about the paper: https://x.com/MhRohban/status/1867803097596338499
Compositional Learning Journal Club
Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.
*✅ This Week's Presentation:*
*Title:* Counting Understanding in Vision-Language Models
*Presenter:* Arash Marioriyad
*Abstract:* Counting-related challenges represent some of the most significant compositional understanding failure modes in vision-language models (VLMs) such as CLIP. While humans, even in the early stages of development, readily generalize over numerical concepts, these models often struggle to accurately interpret numbers beyond three, with the difficulty intensifying as the numerical value increases. In this presentation, we explore the counting-related limitations of VLMs and examine solutions proposed within the field to address these issues.
*Papers:*
- Teaching CLIP to Count to Ten (ICCV, 2023)
- CLIP-Count: Towards Text-Guided Zero-Shot Object Counting (ACM-MM, 2023)
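As a quick illustration of the failure mode described in the abstract above, here is a minimal, hedged sketch of probing CLIP's counting behavior by ranking captions that differ only in the stated count. It assumes the Hugging Face transformers CLIP API; the image path and prompt template are illustrative stand-ins, not from the papers above.
```python
# Minimal sketch: probe CLIP's counting behavior by ranking captions that
# differ only in the stated count. Assumes the Hugging Face `transformers`
# CLIP API; "cats.jpg" and the prompt template are illustrative stand-ins.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cats.jpg")  # hypothetical image with a known object count
counts = ["one", "two", "three", "four", "five", "six"]
prompts = [f"a photo of {n} cats" for n in counts]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # (1, num_prompts) similarities
probs = logits.softmax(dim=-1).squeeze(0)

for prompt, p in zip(prompts, probs):
    print(f"{prompt}: {p:.3f}")
# In practice CLIP often ranks small counts plausibly but degrades past ~3,
# which is the failure mode the abstract describes.
```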
Session Details:
- *Date:* Sunday
- *Time:* 5:00 - 6:00 PM
- *Location:* Online at vc.sharif.edu/ch/rohban
We look forward to your participation! ✌️
Open Research Position: Visual Anomaly Detection
We are pleased to announce an open research position at the RIML Lab, Sharif University of Technology, supervised by Dr. Rohban.
Project Description:
Industrial inspection and quality control are among the most prominent applications of visual anomaly detection. In this setting, the model is given a training set of solely normal samples and learns their distribution. During inference, any sample that deviates from this learned normal distribution should be recognized as an anomaly.
This project aims to improve the capabilities of existing models, allowing them to detect intricate anomalies that extend beyond conventional defects.
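To make the normal-only setup above concrete, here is a minimal sketch (not the project's actual method): embed normal training images with a frozen pretrained backbone, then score a test image by its distance to the nearest normal embedding. The random tensors stand in for real, normalized image batches.
```python
# Minimal sketch of the normal-only training setup described above: a frozen
# pretrained backbone embeds images, and a test sample is scored by its
# distance to the nearest normal-training embedding. Random tensors stand in
# for real (normalized) image batches; illustrative only, not the project.
import torch
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # use penultimate features as embeddings
backbone.eval()

@torch.no_grad()
def embed(x):  # x: (B, 3, 224, 224) image batch
    return torch.nn.functional.normalize(backbone(x), dim=-1)

normal_images = torch.randn(64, 3, 224, 224)  # stand-in for normal-only data
test_images = torch.randn(8, 3, 224, 224)     # stand-in for inference batch

normal_bank = embed(normal_images)            # memory bank of normal features

@torch.no_grad()
def anomaly_score(x):
    d = torch.cdist(embed(x), normal_bank)    # distances to every normal sample
    return d.min(dim=-1).values               # far from all normals => anomalous

scores = anomaly_score(test_images)           # threshold these to flag anomalies
print(scores)
```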
Introductory Paper:
Deep Industrial Image Anomaly Detection: A Survey
Requirements:
- Good understanding of deep learning concepts
- Fluency in Python, PyTorch
- Willingness to dedicate significant time
Submit your application here:
Application Form
Application Deadline:
2024/11/22 (23:59 UTC+3:30)
If you have any questions, contact:
@sehbeygi79
Open Position: Visual Compositional Generation Research
We are excited to announce an open research position for a project under Dr. Rohban at the RIML Lab (Sharif University of Technology). The project focuses on improving text-to-image generation in diffusion-based models by addressing compositional challenges.
Project Description:
Large-scale diffusion-based models excel at text-to-image (T2I) synthesis, but they still suffer from failures such as missing objects and improper attribute binding. This project aims to study and resolve these compositional failures to improve the quality of T2I models.
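As a concrete illustration (a sketch, not the project's method), the snippet below generates several candidates for a compositional prompt with a diffusers pipeline and keeps the best-scoring one under a CLIP-based scorer, in the spirit of the selection-based paper listed under Key Papers below. The model names and scorer are assumptions; the papers use stronger alignment measures.
```python
# Minimal sample-and-select sketch for compositional T2I, in the spirit of the
# selection-based paper under "Key Papers" below. Assumes the `diffusers`
# StableDiffusionPipeline and a CLIP similarity scorer; treat this purely as
# an illustration, not the project's pipeline.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a red book and a yellow vase"  # attribute binding often fails here

def clip_score(image, text):
    inputs = proc(text=[text], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        return clip(**inputs).logits_per_image.item()

# draw several candidates, keep the one best aligned with the prompt
candidates = [pipe(prompt).images[0] for _ in range(4)]
best = max(candidates, key=lambda img: clip_score(img, prompt))
best.save("best_sample.png")
```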
Key Papers:
- T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional T2I Generation
- Attend-and-Excite: Attention-Based Semantic Guidance for T2I Diffusion Models
- If at First You Don’t Succeed, Try, Try Again: Faithful Diffusion-based Text-to-Image Generation by Selection
- ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization
Requirements:
- Must: PyTorch, deep learning
- Recommended: Transformers and diffusion models
- Able to dedicate significant time to the project
Important Dates:
- Application Deadline: 2024/10/12 (23:59 UTC+3:30)
Apply here:
Application Form
For questions:
? [email protected]
? @amirkasaei
Compositional Learning Journal Club
Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.
✅ This Week's Presentation:
Title: A semiotic methodology for assessing the compositional effectiveness of generative text-to-image models
Presenter: Amir Kasaei
Abstract:
This work proposes a new methodology for evaluating text-to-image generation models, addressing limitations in current evaluation techniques. Existing methods, which rely on metrics such as fidelity and CLIPScore, often conflate criteria like position, action, and photorealism in their assessments. The new approach adapts model analysis from visual semiotics, establishing distinct visual composition criteria along three key dimensions: plastic categories, multimodal translation, and enunciation, each with specific sub-criteria. The methodology is tested on Midjourney and DALL·E, providing a structured framework for future quantitative analyses of generated images.
Session Details:
- Date: Sunday
- Time: 5:00 - 6:00 PM
- Location: Online at vc.sharif.edu/ch/rohban
We look forward to your participation! ✌️
Compositional Learning Journal Club
Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.
✅ This Week's Presentation:
Presenter: Amir Kasaei
Abstract:
Recent advancements in text-conditioned image generation, particularly through latent diffusion models, have achieved significant progress. However, as text complexity increases, these models often struggle to accurately capture the semantics of prompts, and existing tools like CLIP frequently fail to detect these misalignments.
This presentation introduces a Decompositional-Alignment-Score, which breaks down complex prompts into individual assertions and evaluates their alignment with generated images using a visual question answering (VQA) model. These scores are then combined to produce a final alignment score. Experimental results show this method aligns better with human judgments compared to traditional CLIP and BLIP scores. Moreover, it enables an iterative process that improves text-to-image alignment by 8.7% over previous methods.
This approach not only enhances evaluation but also provides actionable feedback for generating more accurate images from complex textual inputs.
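As a rough illustration of the decompose-then-VQA idea described above, the sketch below asks a VQA model one question per assertion and averages the yes-probabilities into a single alignment score. The hand-written assertion list, image path, and ViLT VQA model are stand-ins; the actual method derives assertions automatically and uses its own scoring.
```python
# Rough sketch of the decompositional alignment idea described above: split a
# complex prompt into simple assertions, query a VQA model about each, and
# average the yes-probabilities into one score. The hand-written assertions,
# image path, and ViLT VQA model are stand-ins, not the paper's exact method.
from PIL import Image
from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

image = Image.open("generated.png")  # hypothetical generated image
# assertions decomposed from "a red book on a wooden table"
assertions = [
    "Is there a book in the image?",
    "Is the book red?",
    "Is there a wooden table in the image?",
    "Is the book on the table?",
]

def yes_prob(question):
    answers = vqa(image=image, question=question, top_k=5)
    return next((a["score"] for a in answers if a["answer"] == "yes"), 0.0)

scores = [yes_prob(q) for q in assertions]
alignment = sum(scores) / len(scores)  # combined alignment score
print(f"Decompositional alignment score: {alignment:.3f}")
```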
Session Details:
- Date: Sunday
- Time: 2:00 - 3:00 PM
- Location: Online at vc.sharif.edu/ch/rohban
We look forward to your participation! ✌️