The DevDiscuss Podcast begins with an interview and ends with commentary from listeners — and we like to feature the actual voices from our community!
To inform an upcoming episode of the show, we'd like to know...
What are your biggest performance engineering hacks?
For your chance to hear your actual comments on an upcoming episode, answer the question above by:
Calling our Google Voice at +1 (929) 500-1513 and leaving a message 📞
Sending a voice memo to firstname.lastname@example.org 🎙
OR, leaving a comment here (we'll read your response aloud for you) 🗣
Please send in your recordings by Wednesday, August 4th at 1 PM ET (5 PM UTC, 10 AM PT)
Voice recordings will be given priority placement 😉
Catch up on recent episodes of the show here. The new season premieres soon 👀
Latest comments (4)
To be honest (and a bit ashamed), it was just a macro on a Rumba terminal that automated the payment routine at my first job, some decades ago. It was truly a hack: my job was far removed from coding, I wasn't authorized to mess with the mainframe, and my boss didn't know about it. Later on I told him why our small team had gained so much extra capacity, and he had my back.
Payments were a problem. Many small invoices from many providers every week, plus a slow mainframe connection, meant processing took 3.5 person-days per week from a team of three people. I made some macros for the accounting codes, the various screens, and the providers' codes, and the job then got done in an hour.
It's good for me to remember the importance of front-end needs as well as good basic usability. That was the very beginning of what is now UX.
Streaming large data from an e-commerce system was a life hack, I would say. We could handle any amount of load thrown at us; the downside was the delay when encountering a particularly large data set. However, we're trying to improve that by scaling vertically :P
Ages ago, I worked on a company app with a SQL Server back end, and report queries were slow. A simple, fast fix that worked about 90% of the time was to add a GROUP BY clause listing every output column. Since each row contained a unique value, this didn't change the results, but it improved the performance of most of our report queries. I wasn't yet experienced enough to trace the issue back to its root cause.
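As a sketch of what that trick looks like (the table and column names here are made up for illustration), the grouping lists every selected column, so a row with a unique key survives unchanged:

```sql
-- Hypothetical report query: OrderID is unique per row, so grouping by
-- every output column returns the same rows, but can nudge the optimizer
-- toward a different (sometimes faster) execution plan.
SELECT o.OrderID, c.CustomerName, o.Total
FROM Orders AS o
JOIN Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY o.OrderID, c.CustomerName, o.Total;
```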
The command line: create aliases, scripts, and bash functions for your most-used tools and workflows, and automate as much as possible.
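For example, a couple of snippets along those lines for a `~/.bashrc` (the names and commands here are made up for illustration):

```shell
# Shorten a frequently used command with an alias
alias gs='git status --short'

# mkcd: make a directory (and any parents) and cd into it in one step
mkcd() {
  mkdir -p "$1" && cd "$1" || return
}

# taillog: hypothetical helper that follows the log for a named service
taillog() {
  tail -f "/var/log/${1}.log"
}
```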
Have a microservice-based app with multiple repos, and want to run all the containers, DB, server, web app, etc. in one go and keep an eye on their logs as well?
Easy: just set up tmuxp with the right configs and boom, you have everything ready to go in one shot.
And that's just scratching the surface :)
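For anyone curious, a tmuxp session config along these lines does the trick (the session name, services, and commands below are hypothetical, not from the original comment):

```yaml
# Hypothetical ~/.tmuxp/myapp.yaml — load with: tmuxp load myapp
session_name: myapp
windows:
  - window_name: services
    layout: tiled
    panes:
      - docker compose up db          # database container
      - npm run dev --prefix webapp   # front-end dev server
      - go run ./cmd/server           # API server
      - docker compose logs -f        # aggregated container logs
```

One `tmuxp load` then opens all four panes side by side, so every service and its logs are visible at once.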