I do want to make use of the old Pi, even if only as a NameNode; I don't think that role needs much computational resource. I'm new to Spark, so the question might be silly, sorry about that.
Even though NameNodes don't process data themselves, they still have CPU and memory requirements: they orchestrate the data processing, maintain the filesystem metadata, and so on. I've seen 4GB per node cited as the recommended minimum. All I know from experience is that 1GB barely works.
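If you do want to try squeezing a NameNode onto a small box, you can cap its JVM heap explicitly. This is only a sketch for a low-memory node; the heap values here are illustrative, not recommendations, and the variable is `HADOOP_NAMENODE_OPTS` in Hadoop 2.x (Hadoop 3.x renamed it to `HDFS_NAMENODE_OPTS`):

```
# hadoop-env.sh — cap the NameNode heap on a low-memory node.
# Illustrative values only; a real cluster's heap needs scale with
# the number of files/blocks the NameNode has to track.
export HADOOP_NAMENODE_OPTS="-Xms256m -Xmx768m $HADOOP_NAMENODE_OPTS"
```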
Spark also enforces minimum memory limits, and I don't think 256MB is enough to do anything.
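For context: since Spark 1.6, the unified memory manager reserves roughly 300MB per JVM and refuses to launch a driver or executor with less than about 1.5x that (~450MB), which is why a 256MB board is a non-starter. A minimal low-memory configuration sketch (values are illustrative, not tuned recommendations) might look like:

```
# spark-defaults.conf — a low-memory sketch for a small Pi cluster.
# Spark will refuse to start a JVM below ~450MB, so 512m is close to
# the practical floor; spark.memory.fraction trades execution/storage
# memory against user/JVM overhead.
spark.driver.memory     512m
spark.executor.memory   512m
spark.memory.fraction   0.5
```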
Okay, so about the only thing 256MB can do is run an Nginx reverse proxy in my private cloud, or R.I.P. Thanks for that.
Maybe you could turn it into a Pi-Hole?
Unfortunately, the Pi-Hole project requires at least 512MB of memory. My old Pi should R.I.P. right now; I'll leave it to my children as their first gift from the elder generation.