Or is this not even a useful way of thinking in the NoSQL paradigm? Regardless, how would you go about mapping large amounts of users to large amounts of groups, or handling likes on posts, or handling tags?
I'm super curious to hear your thoughts!
Top comments (8)
That's a quintessential relational data model, and document databases really aren't designed for that use case. It can be done, as Ankur demonstrates, but awkwardly and without the safeguards of foreign key constraints.
Document dbs shine when you have sets of hierarchies with few to no interrelationships. In other cases, it's like using a wrench to drive nails.
Hi, I see Tinder uses MongoDB, and they model the swiping feature as an M-M relationship. How do you think that would work? Also, what DB do you recommend for M-M?
The ONLY truly viable way to ensure the integrity of data in many-to-many relationships is to use database engines that are designed to correctly handle foreign key constraints. RDBMSs are the typical way to do this, and many modern RDBMSs also handle JSON data extremely well: they can store documents in JSON columns while still maintaining full referential integrity. (Yes, there are some who would suggest enforcing this in application code; most who do have never had to fix up the data messes that can, and often does, cause.)
PostgreSQL (Amazon Redshift), MySQL 8 (Amazon RDS / Aurora MySQL 8), and SQL Server (Amazon RDS for SQL Server / Azure SQL Database) are all very widely used, and each has JSON functions that let you implement many features of NoSQL databases within an RDBMS.
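For a rough illustration of that hybrid approach (not from the comment above; table and column names are my own assumptions), the many-to-many link lives in a join table guarded by foreign keys, while free-form document data sits in a JSONB column. A minimal sketch with node-postgres:

```typescript
// Sketch only: assumed schema, using the node-postgres ("pg") client.
import { Client } from "pg";

async function demo(connectionString: string) {
  const client = new Client({ connectionString });
  await client.connect();

  await client.query(`
    CREATE TABLE IF NOT EXISTS students (
      id      SERIAL PRIMARY KEY,
      name    TEXT NOT NULL,
      profile JSONB                      -- flexible, document-style data
    );
    CREATE TABLE IF NOT EXISTS courses (
      id   SERIAL PRIMARY KEY,
      name TEXT NOT NULL
    );
    -- The join table enforces referential integrity for the M-M link.
    CREATE TABLE IF NOT EXISTS enrollments (
      student_id INT NOT NULL REFERENCES students(id),
      course_id  INT NOT NULL REFERENCES courses(id),
      PRIMARY KEY (student_id, course_id)
    );
  `);

  // "Which students are enrolled in course 42?" is a plain join.
  const { rows } = await client.query(
    `SELECT s.id, s.name
       FROM students s
       JOIN enrollments e ON e.student_id = s.id
      WHERE e.course_id = $1`,
    [42]
  );

  await client.end();
  return rows;
}
```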
Let's take the students and courses example. You can create:
Student {
  _id: ID
  name: String
  courses: Array[ID]
}

Course {
  _id: ID
  name: String
}
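To show how that single-sided reference is used (just a sketch; the database, collection, and variable names are assumptions), fetching all courses for one student is a read of the ID array followed by an $in query:

```typescript
// Sketch: assumes a "students" collection shaped like the Student document
// above, with `courses` holding an array of Course ObjectIds.
import { MongoClient, ObjectId } from "mongodb";

async function coursesForStudent(uri: string, studentId: ObjectId) {
  const client = new MongoClient(uri);
  await client.connect();
  const db = client.db("school");

  // Read the student's course-ID array, then fetch the matching courses.
  const student = await db.collection("students").findOne({ _id: studentId });
  const courses = await db
    .collection("courses")
    .find({ _id: { $in: student?.courses ?? [] } })
    .toArray();

  await client.close();
  return courses;
}
```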
Would it be slow to look up which students were in a particular course then, if there are, say, 100k students and 100k courses?
Let's say you have N students and M courses. If we store the course IDs in sorted order, the time complexity should be about N * log(M): you scan every student and binary-search their course list for the course in question.
We can also store a list of student IDs in the course document. Then the time complexity should be N, or effectively O(1), since Mongo creates an index on _id.
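To make that second option concrete (a hedged sketch; field and collection names are my own assumptions), the course document carries a students array of IDs, and the lookup rides on the default _id index:

```typescript
// Sketch of the "store student IDs on the course document" option.
import { MongoClient, ObjectId } from "mongodb";

async function studentsInCourse(uri: string, courseId: ObjectId) {
  const client = new MongoClient(uri);
  await client.connect();
  const db = client.db("school");

  // Finding the course is cheap: _id is indexed by default.
  const course = await db.collection("courses").findOne({ _id: courseId });
  const studentIds: ObjectId[] = course?.students ?? [];

  // One more indexed query turns the ID list into full student documents.
  const students = await db
    .collection("students")
    .find({ _id: { $in: studentIds } })
    .toArray();

  await client.close();
  return students;
}
```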
Hi,
"We can also store list of student IDs in course document. Then the time complexity should be N or o(1) as mongo creates indexes on id."
=> Doesn't this make the insert function more complicated? If you want to add one student with 10 courses, you have to insert the user_id into courses.student_list (for all 10 courses) and insert the 10 course_ids into the student's courses array.
Everything has its price; sometimes you just have to accept the complexity.
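If you do accept it, one way to keep both sides consistent (purely a sketch, assuming the collections above and a replica set, since multi-document transactions require one) is to wrap the two updates in a transaction:

```typescript
// Sketch: enroll one student in several courses, updating both the
// student's `courses` array and each course's `students` array atomically.
import { MongoClient, ObjectId } from "mongodb";

async function enroll(uri: string, studentId: ObjectId, courseIds: ObjectId[]) {
  const client = new MongoClient(uri);
  await client.connect();
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db("school");

      // Add every course ID to the student's array (no duplicates).
      await db.collection("students").updateOne(
        { _id: studentId },
        { $addToSet: { courses: { $each: courseIds } } },
        { session }
      );

      // Add the student's ID to each of the course documents.
      await db.collection("courses").updateMany(
        { _id: { $in: courseIds } },
        { $addToSet: { students: studentId } },
        { session }
      );
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```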
One of the biggest issues I found with this IDs approach is the MongoDB aggregation pipeline: it's fine as long as you don't have to do complex $lookup stages across that many IDs.
There is a memory threshold on aggregations, and I guess that's why Mongoose uses plain queries in its populate function.
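To illustrate the difference (a hedged sketch; model and collection names are assumptions, not from the comment), a $lookup joins inside the server and can bump into aggregation memory limits unless you allow disk use, while Mongoose's populate() issues a separate plain find() on the referenced IDs:

```typescript
// Sketch comparing an aggregation $lookup with Mongoose's populate().
import mongoose, { Schema } from "mongoose";

const Student = mongoose.model("Student", new Schema({ name: String }));
const Course = mongoose.model(
  "Course",
  new Schema({
    name: String,
    students: [{ type: Schema.Types.ObjectId, ref: "Student" }],
  })
);

// 1) $lookup: the join happens inside the aggregation pipeline; large
//    intermediate results can hit per-stage memory limits, so allowDiskUse
//    lets the heavier stages spill to disk.
async function viaAggregate(courseId: mongoose.Types.ObjectId) {
  return Course.aggregate([
    { $match: { _id: courseId } },
    {
      $lookup: {
        from: "students",        // raw collection name, not the model name
        localField: "students",
        foreignField: "_id",
        as: "enrolled",
      },
    },
  ]).allowDiskUse(true);
}

// 2) populate(): Mongoose runs a separate find() on the referenced IDs
//    instead of an in-server join, which sidesteps those pipeline limits.
async function viaPopulate(courseId: mongoose.Types.ObjectId) {
  return Course.findById(courseId).populate("students");
}
```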