
A Walk Down Memory Lane - How Technology Has Changed (and stayed the same) over the last 23 years.

Paul Michaels · 3 min read

Back when I started being paid to write computer programs, software was installed from floppy disks, a powerful machine had megabytes (rather than gigabytes) of RAM, Tony Blair had just won an election to become the new Prime Minister of the UK, replacing John Major, and the first Harry Potter book had just been published.

Today, things are different: for a start, there are people of working age who don't remember John Major (he was quite easy to forget). More importantly, in tech, we haven't seen floppy disks for well over a decade, and your average phone would run rings around what would, at the time, have been considered a supercomputer.

There were some vague signposts of where things were heading, though; in just a couple of years, SETI would arguably popularise the concept of distributed computing with SETI@home. This was around the time that Google started, easily dislodging Yahoo and Lycos from their perches. However, things were still a way off: Mark Zuckerberg was still at school, and the best that Microsoft had was Windows 95 - Windows 98 was coming shortly - but they were still working to the 'a computer on every desk and in every home' strategy.

Source control was typically achieved by storing your code in different directories on your local disk, or by zipping up versions and copying them to a server. Git was yet to be created, and Visual SourceSafe was a relatively new product. In fact, I didn't personally see any source control software until around the turn of the millennium, when I was introduced to PVCS.

Oddly, though, we seem to be on a circular trajectory (obviously, for things like source control, everyone now uses it, and everyone accepts that it's a good idea to do so). Consider technologies like Razor Components: what we have here is, fundamentally, software that (can) run on the client machine, but is split into separate components, each of which encapsulates functionality and can be reused across multiple projects - a bit like COM.
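To make that concrete, here's a minimal sketch of what such a component might look like (the component and parameter names are illustrative, not from any real project):

```razor
@* Greeting.razor - a hypothetical, reusable component *@
<h3>Hello, @Name!</h3>

@code {
    // Parameters let the host page configure the component,
    // much as properties configured a COM control.
    [Parameter]
    public string Name { get; set; } = "world";
}

@* Used from any page or other component: <Greeting Name="Paul" /> *@
```

The encapsulation is the point: the markup, logic, and configuration surface travel together, and the consumer only sees the parameters.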

We used to have copious logs, too - which is another thing that's coming around. If you deploy software to a machine that you can't control, or even access, you need some decent logging; to an extent, that is true of Blazor, but for something like a cloud function, it's doubly true.
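In modern .NET that usually means the `Microsoft.Extensions.Logging` abstractions; a hedged sketch (the class and category names here are made up for illustration):

```csharp
using System;
using Microsoft.Extensions.Logging;

// Illustrative only: a processor whose log entries may be the only
// evidence of what happened on a machine you can never log in to.
public class EdiFileProcessor
{
    private readonly ILogger<EdiFileProcessor> _logger;

    public EdiFileProcessor(ILogger<EdiFileProcessor> logger) => _logger = logger;

    public void Process(string path)
    {
        _logger.LogInformation("Processing {Path}", path);
        try
        {
            // ... actual work would go here ...
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Failed to process {Path}", path);
            throw;
        }
    }
}
```

In a cloud function, a provider-supplied `ILogger` is typically handed to you, and those structured entries end up in whatever monitoring the platform offers.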

Some things, in fact, have hardly changed at all: whilst creating a Windows Service in VB6 didn't look exactly like it does today, you still got a Windows Service, and you probably had it do the same basic thing.
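For comparison, here's a rough sketch of the modern shape of that same idea - a .NET Worker Service (this is an illustrative outline, not anyone's production code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// A hedged sketch: wake up periodically, check something, go back
// to sleep - the same basic loop a VB6 service ran decades ago.
public class PollingWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // ... poll a directory, a queue, a database ...
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}

// Host setup (Program.cs), registering it as a Windows Service:
// Host.CreateDefaultBuilder(args)
//     .UseWindowsService()
//     .ConfigureServices(s => s.AddHostedService<PollingWorker>())
//     .Build()
//     .Run();
```

The plumbing is nicer, but the artefact - a long-running background process managed by the OS - is the same.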

Finally, we mentioned cloud computing earlier, but what about microservices? Did Sam Newman really invent them around ten years ago? In fact, the concept of having a process that reads, say, a directory or a database, and does something based on whatever is inside it, fulfils pretty much every criterion of a decoupled microservice. Obviously, we didn't call them microservices at the time: we called them things like "The EDI file monitor", or "Barry's program"; but I'm not sure that, if you drew an architecture diagram, you'd be able to tell the difference between "Barry's program" and an Azure Function or a Google Cloud Function!
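A plausible sketch of "Barry's program" (the path, timing, and processing are all invented for illustration):

```csharp
using System;
using System.IO;
using System.Threading;

// A hedged reconstruction of the late-90s pattern: poll a directory
// and act on whatever turns up in it.
class EdiFileMonitor
{
    static void Main()
    {
        const string inbox = @"C:\edi\inbox"; // illustrative path

        while (true)
        {
            foreach (var file in Directory.EnumerateFiles(inbox))
            {
                Console.WriteLine($"Processing {file}");
                // ... parse the file, update the database, etc. ...
                File.Delete(file);
            }
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}
```

Draw that as a box with files flowing in and side effects flowing out, and it's genuinely hard to distinguish from an Azure Function with a blob trigger - only the hosting has changed.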


Discussion


I remember being in high school, when my friend and I "took on the responsibility" of the "computer lab" - which basically meant skipping some of the more boring classes to go play Wolfenstein 3D in that "lab". I also remember his face when he turned to me one day and said "Zohar, I think I've broken the computer with the CD-ROM". Yeah, that's right: one computer in the entire school had a CD drive... Oh, the good old days :-)
I'm "celebrating" 21 years as a developer, btw.