We just wrapped up the September '24 AI, Machine Learning and Computer Vision Meetup, and if you missed it or want to revisit it, here’s a recap! In this blog post you’ll find the playback recordings, highlights from the presentations and Q&A, as well as the upcoming Meetup schedule so that you can join us at a future event.
First, Thanks for Voting for Your Favorite Charity!
In lieu of swag, we gave Meetup attendees the opportunity to help guide a $200 donation to charitable causes. The charity that received the highest number of votes this month was Heart to Heart International, an organization that ensures quality care is provided equitably in medically under-resourced communities and in disaster situations. We are sending this event’s charitable donation of $200 to Heart to Heart International on behalf of the Meetup members!
Missed the Meetup? No problem. Here are playbacks and talk abstracts from the event.
Reducing Hallucinations in ChatGPT and Similar AI Systems
LLMs are prone to producing hallucinations, largely due to their limited content and knowledge base. One of the most widely used techniques to reduce hallucinations is incorporating external knowledge sources. Among these, using knowledge graphs has shown particularly impressive results in enhancing the accuracy and reliability of the results produced by LLMs. In this talk, we will explore what knowledge graphs are, why they are important, and how to utilize the Neo4j graph database to improve the reliability of LLMs.
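To make the idea concrete, here is a minimal, illustrative sketch (not code from the talk) of grounding an LLM prompt in facts pulled from Neo4j. The connection details, graph schema, and the `fetch_facts` helper are hypothetical; any Cypher query that returns relevant triples for the question would work.

```python
# Illustrative sketch: ground an LLM prompt in facts retrieved from Neo4j.
# The local instance, credentials, and graph contents are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def fetch_facts(entity_name):
    """Return subject-relation-object facts about an entity from the graph."""
    query = (
        "MATCH (e {name: $name})-[r]->(other) "
        "RETURN e.name AS subject, type(r) AS relation, other.name AS object "
        "LIMIT 25"
    )
    with driver.session() as session:
        return [
            f"{rec['subject']} {rec['relation']} {rec['object']}"
            for rec in session.run(query, name=entity_name)
        ]

facts = fetch_facts("aspirin")  # hypothetical entity in a hypothetical graph
prompt = (
    "Answer using ONLY the facts below. If they are insufficient, say you don't know.\n\n"
    "Facts:\n" + "\n".join(facts) +
    "\n\nQuestion: What does aspirin interact with?"
)
# `prompt` can now be sent to any LLM; constraining the model to the retrieved
# facts is what reduces hallucination.
print(prompt)
```

Because the model is instructed to answer only from the retrieved triples, its responses stay anchored to the graph rather than to whatever it half-remembers from pretraining.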
Speaker: Abhimanyu Aryan started in the VR industry, then worked as an ML Engineer (Vision) for the Indian Air Force and contributed to Julia’s open-source web ecosystem (mostly Genie). He is currently building an AI startup in stealth.
Q&A
- What are the distinct advantages of using knowledge graphs over vector databases for hallucination reduction?
- Do most RAG systems use a Cypher-type mechanism, or do they use some other type of structure?
- How do knowledge graphs deal with conflicting information?
- How can knowledge graphs help with decision trees?
- What's the current state of the art for knowledge graph grounding?
Resource Links
Update: Data-Centric AI Competition on Hugging Face Spaces
Are you ready to challenge the status quo in AI development? Then join Voxel51’s Harpreet Sahota for the latest updates, plus tips and tricks on the first-ever Data-Centric AI competition on Hugging Face Spaces, focusing on the often-overlooked yet crucial aspect of AI: data curation. Learn more about the competition, rules and prizes.
Speaker: Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He’s got a deep interest in RAG, Agents, and Multimodal AI.
Resource Links
It's in the Air Tonight: Sensor Data in RAG
I will do a quick overview of the basics of vector databases and Milvus and then dive into a practical example of how to use one as part of an application. I will demonstrate how to consume air quality data and ingest it into Milvus as vectors and scalars. We will then use our vector database of air quality readings to feed our LLM and get proper answers to air quality questions. I will show you all the steps to build a RAG application with Milvus, LangChain, Ollama, Python, and air quality reports. Preview the demo on Medium.
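As a rough idea of what the ingest-and-retrieve half of such a pipeline looks like, here is a simplified sketch (not the demo code) using pymilvus’s MilvusClient with Milvus Lite and sentence-transformers for embeddings; the air quality records and field names are made up for illustration.

```python
# Simplified sketch of the ingest-and-search half of a Milvus RAG pipeline.
# Assumes Milvus Lite (local .db file) via pymilvus and sentence-transformers;
# the air quality readings below are invented for illustration.
from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
client = MilvusClient("air_quality.db")            # local Milvus Lite file

client.create_collection(collection_name="air_quality", dimension=384)

# Each reading is stored as a vector (the embedded text) plus scalar fields.
readings = [
    {"station": "Princeton, NJ", "aqi": 42, "pollutant": "PM2.5"},
    {"station": "Newark, NJ", "aqi": 87, "pollutant": "Ozone"},
]
rows = []
for i, r in enumerate(readings):
    text = f"Station {r['station']} reported AQI {r['aqi']} driven by {r['pollutant']}."
    rows.append({"id": i, "vector": encoder.encode(text).tolist(), "text": text, **r})
client.insert(collection_name="air_quality", data=rows)

# Retrieve the readings most relevant to a question.
question = "How is the air quality in Newark?"
hits = client.search(
    collection_name="air_quality",
    data=[encoder.encode(question).tolist()],
    limit=2,
    output_fields=["text", "aqi"],
)
for hit in hits[0]:
    print(hit["entity"]["text"])
```

In the full application, the retrieved readings would be injected into a prompt and sent to a Llama model served by Ollama via LangChain.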
Speaker: Tim Spann is a Principal Developer Advocate for Zilliz and Milvus. He works with Milvus, Generative AI, HuggingFace, Python, Big Data, IoT, and Edge AI. Tim has over twelve years of experience with IoT, big data, distributed computing, messaging, machine learning, and streaming technologies.
Q&A
- How is vector search different from a KNN?
- Do you think vector search can be used for something like finding clips where a certain person is present?
- What size of Llama 3.1 are you using in the demo?
- Do you think that a Raspberry Pi would have enough compute to run this demo in real time?
Resource Links
- Learn more about Milvus
- GitHub repo used in the presentation
Join the AI, Machine Learning and Data Science Meetup!
The combined membership of the Computer Vision and AI, Machine Learning and Data Science Meetups has grown to over 20,000 members! The goal of the Meetups is to bring together communities of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of AI and complementary technologies.
Join one of the 12 Meetup locations closest to your timezone.
- Athens
- Austin
- Bangalore
- Boston
- Chicago
- London
- New York
- Peninsula
- San Francisco
- Seattle
- Silicon Valley
- Toronto
What’s Next?
Up next on Sept 19, 2024 at 8:30 AM PT / 11:30 AM ET, we have five great speakers lined up!
- Interpretable AI Models in Radiology - Dr. Tolga Tasdizen, Electrical and Computer Engineering and the Scientific Computing and Imaging (SCI) Institute at the University of Utah
- Bridging Species with Pixels: Advancing Comparative Computational AI in Veterinary Oncology - Dr. Christopher Pinard, DVM DVSc DACVIM, ANI.ML Health and Ontario Veterinary College, University of Guelph and Dr. Kuan-Chuen, ANI.ML Health
- Deep-Dive: NVIDIA’s VISTA-3D and MedSAM-2 Medical Imaging Models - Daniel Gural, Machine Learning Engineer at Voxel51
- Exploring Instance Imbalance in Medical Semantic Segmentation - Soumya Snigdha Kundu, Ph.D. student at King’s College London
Register for the Zoom here. You can find a complete schedule of upcoming Meetups on the Voxel51 Events page.
Get Involved!
There are a lot of ways to get involved in the Computer Vision Meetups. Reach out if you identify with any of these:
- You’d like to speak at an upcoming Meetup
- You have a physical meeting space in one of the Meetup locations and would like to make it available for a Meetup
- You’d like to co-organize a Meetup
- You’d like to co-sponsor a Meetup
Reach out to Meetup co-organizer Jimmy Guerrero on Meetup.com or ping me on LinkedIn to discuss how to get you plugged in.
—
These Meetups are sponsored by Voxel51, the company behind the open source FiftyOne computer vision toolset. FiftyOne enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster. It’s easy to get started in just a few minutes.
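If you want to try it, a minimal quickstart (assuming `pip install fiftyone`) looks something like this:

```python
# Minimal FiftyOne quickstart: load a small sample dataset from the zoo and
# open it in the FiftyOne App to browse samples, labels, and predictions.
import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")  # small detection dataset with predictions
session = fo.launch_app(dataset)              # interactive App for curation and evaluation
session.wait()
```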