Google has recently taken two steps forward and one step back when it comes to domain security. On the one hand, the company has been pushing for the increased adoption and normalisation of HTTPS. On the other, it is now stripping “trivial” subdomains from the URL bar, hiding the “www” and “m” (for mobile) subdomains and thereby presenting users with inaccurate information about the precise web page they are visiting.
Google’s HTTPS efforts have been largely laudable. To normalise HTTPS, among other things, Google now factors whether a website has an HTTPS certificate into its search rankings, supports the Let’s Encrypt initiative to provide HTTPS certificates readily and free of charge, and is gradually shifting toward explicitly marking websites that serve HTTP content as “not secure,” rather than marking websites that serve HTTPS as “secure.” Despite these advances in domain security, however, not all of the company’s efforts further users’ interests.
Unlike its HTTPS efforts, Google’s decision to hide the www and m subdomains is a clear setback for user security. While these subdomains are frequently extraneous, they are not always so, and the browser should not assume they are. Consider web hosts and social media sites that follow the convention of allowing users to register their own subdomains. While the www subdomain will usually be reserved, the m subdomain may not be, allowing a malicious actor to register m.example.com and, as a result of this change, masquerade as the domain owner in order to phish users.
Don’t get me wrong: I am no fan of the www subdomain. I am defending its presentation in the URL bar despite being an early supporter of the No-WWW movement, which I came across while teaching myself web development back in 2003. As the now-defunct No-WWW website put it:
By default, all popular Web browsers assume the HTTP protocol. In doing so, the software prepends the ‘http://’ onto the requested URL and automatically connects to the HTTP server on port 80. Why then do many servers require their websites to communicate through the www subdomain? Mail servers do not require you to send emails to recipient@mail.domain.com. Likewise, web servers should allow access to their pages through the main domain unless a particular subdomain is required.
Succinctly, use of the www subdomain is redundant and time consuming to communicate. The internet, media, and society are all better off without it.
Back then, I found this argument convincing, and I still do today. In almost every case, I think web developers should redirect traffic from the www subdomain to the bare domain itself. On the more extreme end, some developers choose not to recognize the www subdomain at all. In No-WWW’s parlance, these are Class C domains, and attempts to visit www.example.com on such sites are simply met with an error message.
Personally, I find that approach a bit too much; simply put, it is not particularly user-centric. In my view, redirecting users, such as with an .htaccess rewrite, is the ideal approach. That said, there are some good justifications out there for avoiding the creation of extraneous subdomains, despite their popularity.
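As a rough sketch, a www-to-bare-domain redirect in an Apache .htaccess file might look like the following. This is one common mod_rewrite pattern, not the only way to do it, and it assumes mod_rewrite is enabled and the site is served over HTTPS; the domain is inferred from the request rather than hard-coded:

```apache
# Permanently redirect www.<domain> (with any path) to <domain>.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [R=301,L]
```

The 301 status tells browsers and search engines that the bare domain is the canonical address, which is exactly the kind of explicit, developer-chosen behaviour the URL bar should then reflect.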
Regardless of my own aversion to the www subdomain, the decision of how to present one’s website should be left strictly to the web developer and site owner. Those who prefer to redirect users from example.com to www.example.com should be free to do so and have the URL bar properly reflect the site’s URL. Furthermore, unless the DNS records of the domain and the www subdomain are precisely the same, the browser is not justified in making assumptions about their content.
In making such assumptions, the browser adds an extra layer of confusion and insecurity by misleading users about precisely which web page they are visiting. With phishers using increasingly advanced techniques to scam unwitting users, hiding valuable information from the URL bar can prove dangerous. Under this change, a host like www.m.www.example.com simply appears as example.com, and www.hello.m.example.com appears as hello.example.com, providing potential avenues for phishing unsuspecting users.
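To make the elision concrete, here is a rough Python sketch of the stripping behaviour described above. This is a simplification for illustration, not Chrome’s actual implementation; it simply drops every hostname label the browser treats as “trivial”:

```python
# Labels Chrome's URL bar treats as "trivial" and hides (per the
# behaviour described above; a simplified model, not Chrome's code).
TRIVIAL_LABELS = {"www", "m"}

def elide_trivial_subdomains(host: str) -> str:
    """Drop every dot-separated label considered trivial."""
    kept = [label for label in host.split(".") if label not in TRIVIAL_LABELS]
    return ".".join(kept)

print(elide_trivial_subdomains("www.m.www.example.com"))    # example.com
print(elide_trivial_subdomains("www.hello.m.example.com"))  # hello.example.com
```

Note that two distinct hosts can collapse to the same displayed string, which is precisely the ambiguity a phisher can exploit.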
To prevent such situations from arising and to do right by its users, Google should not simply copy Apple, which follows a similar practice in its Safari browser. As we have seen with the removal of the headphone jack from high-end smartphones, blindly copying Apple is not always the best route. Instead, it is helpful to take a lesson from Google’s own approach to HTTPS, where the company strikes the ideal balance: it fully and accurately informs the user while also nudging developers to modify their practices. By following similar methods with the www and m subdomains, Google can likewise nudge developers down a desired path without presenting users with misleading (and potentially dangerous) information.
What do you think? Is Google doing the right thing by simplifying the user interface? Or is it going too far in making changes that are likely to confuse users and provide malicious actors yet another avenue of attack?
Top comments (3)
I agree with pretty much all of what you said, but I'm struggling with one part; maybe you can help me out a little haha.
If I understand you correctly, we'd say dev.to allows us to create subdomains. So now I create m.dev.to and send out that link in the hopes someone clicks on it. How would that be malicious? The site would still need my permission to run scripts of any sort, right?
Am I missing the obvious here?
You don't necessarily need to be able to run scripts to engage in malicious activities. Keylogging can be done using just HTML and CSS (though it requires special processing server-side to work), and it's absolutely possible to do any number of nasty things with carefully crafted text or embedded objects (images, videos, etc.). The likelihood of persistently infecting a user's computer through such an attack is really low, but it's still theoretically possible, and it's absolutely possible to do drive-by attacks that crash the browser or possibly even the whole system.
Welp, I completely forgot about custom content (images, videos, links, ...). I guess commenting when you just got up really isn't the best idea. Thanks for clearing it up though; I'm just gonna run head first into a wall because I didn't think of it when writing the comment.