Discussion on: Building a Raspberry Pi Hadoop / Spark Cluster

SunnyMo

I do want to make use of the old Pi, even if it's just as a NameNode; I don't think that role needs much computational resource. I'm new to Spark, so this might be a silly question. Sorry about that.

Andrew (he/him)

Even though NameNodes aren't processing data, they still have some CPU and memory requirements (they have to orchestrate the data processing, maintain records of the filesystem, etc.). I saw somewhere that 4GB per node was the recommended minimum. All I know from experience is that 1GB seems to barely work.
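
For reference, here's a minimal sketch of capping the NameNode heap on a low-memory node. I'm assuming Hadoop 3.x, where the environment variable is HDFS_NAMENODE_OPTS (Hadoop 2.x used HADOOP_NAMENODE_OPTS), and the sizes are illustrative rather than recommendations:

```sh
# hadoop-env.sh: cap the NameNode JVM heap so it fits on a small node.
# 512 MB initial / 1 GB max are illustrative values for a 1GB Pi.
export HDFS_NAMENODE_OPTS="-Xms512m -Xmx1g ${HDFS_NAMENODE_OPTS}"
```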

Spark sets minimum memory limits and I don't think 256MB is enough to do anything.
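
If I remember right, the floor comes from Spark's unified memory manager (Spark 1.6+): it reserves about 300MB of heap for itself and refuses to start unless it has roughly 1.5x that (about 450MB), which is why a 256MB heap gets rejected outright. Here's a sketch of about the smallest submission that will start (the flags are real spark-submit options; my_job.py is just a placeholder):

```sh
# Anything much below ~450 MB fails at startup with a "System memory
# must be at least..." error, so 512m is close to the practical floor.
spark-submit \
  --driver-memory 512m \
  --executor-memory 512m \
  my_job.py
```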

SunnyMo

Okay, so the only thing 256MB might be good for is running an Nginx reverse proxy in my private cloud, or R.I.P. Thanks for that.

Andrew (he/him)

Maybe you could turn it into a Pi-Hole?

SunnyMo

Unfortunately, the Pi-Hole project requires at least 512MB of memory. My old Pi should R.I.P. right now; I'll leave it to my children as their first gift from the elder generation.