Is that a thing? Honestly, I feel like I don't even know how this works. I always look at GitHub and see teams of people with stars up the wazoo and a ton of commits, where everything is documented and vetted and just plug and play. Then I'm over here like "I know most of you prefer the multiprocessing module in Python, but I use multithreading because my bots are better suited to concurrency than true parallelism." It's been so long since I've worked with a team that I'm not sure anybody would even understand what my code is doing. Then there's the idea of deployment with Docker (which I love, by the way. Containerization is the balls. No hypervisor = good). But it all relates back to that fear the OP mentioned: inadequacy, comparison to others, etc. Thank you for replying to me though!
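For anyone wondering what I mean about concurrency vs. parallelism: a minimal sketch of the idea, with a hypothetical `fetch` standing in for a network-bound bot task. While one thread waits on I/O (simulated here with `time.sleep`), Python releases the GIL and the other threads make progress, so threads work fine for this kind of work even without true parallelism.

```python
import threading
import time

# Hypothetical stand-in for a network-bound bot task: while one thread
# "waits on I/O" (sleeps), the GIL is released and other threads run.
def fetch(results, i):
    time.sleep(0.2)  # simulates waiting on a network response
    results[i] = f"response-{i}"

results = {}
threads = [threading.Thread(target=fetch, args=(results, i)) for i in range(5)]

start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# All five "requests" overlap, so total time is ~0.2s rather than ~1.0s.
print(len(results), elapsed < 0.6)
```

If the work were CPU-bound instead (parsing, number crunching), that's where multiprocessing would actually pay off.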
Oh, and if you like, I'll put my basic bot up on GitHub. It's the standard build I tend to use as a template for the rest. I call it my Pydra (a Hydra built in Python; adorable, I know). It just has basic proxy cycling, user-agent cycling, request throttling, etc. Since I'm very new to TensorFlow and scikit-learn, I haven't deployed any bots with the learning algorithms I've trained, because I'm not sure they produce clean data yet. I'll link it here later; let me know what you think!
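Until I get the repo up, here's roughly the shape of that proxy/user-agent cycling and throttling logic. This is a hedged sketch, not the actual Pydra code: the proxy and user-agent lists are made-up placeholders (the real bot would load them from a config), and `RotatingSession` is a hypothetical name.

```python
import itertools
import time

# Hypothetical pools -- a real bot would load these from a config file.
PROXIES = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]
USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
]

class RotatingSession:
    """Cycles proxies and user agents, and throttles request pacing."""

    def __init__(self, min_interval=1.0):
        self.proxies = itertools.cycle(PROXIES)
        self.agents = itertools.cycle(USER_AGENTS)
        self.min_interval = min_interval
        self.last_request = 0.0

    def next_config(self):
        # Throttle: sleep until min_interval has passed since the last call.
        wait = self.last_request + self.min_interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        self.last_request = time.monotonic()
        # Rotate to the next proxy/user-agent pair.
        return next(self.proxies), {"User-Agent": next(self.agents)}

session = RotatingSession(min_interval=0.0)
proxy, headers = session.next_config()
print(proxy, headers["User-Agent"])
```

Each `next_config()` call hands back the next proxy and a `User-Agent` header, which you'd pass to whatever HTTP client you use for the actual request.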
I'm not sure if it's a thing. I live in a world I don't understand, and so I tend to put stuff out there and see what sticks. My real benefit is that the process of putting things out there sharpens my own understanding of what I'm doing and brings the benefits of clearer vision back into my work.